A Google executive claimed Wednesday that Android has seen the fastest and most successful adoption of any operating system in history.
Speaking at the Morgan Stanley Technology, Media and Telecom Conference, Nikesh Arora, senior vice president at Google, said the following, courtesy of Seeking Alpha:
I mean, look, in the history of operating systems, I think Android has been the quickest and most successful adoption of an operating system in the world. So you just sort of stop, take pause and say, oh my God, that’s crazy. Nobody could have ever predicted that we’re going to get an operating system adopted in an industry, which has so many different OEMs, manufacturing with their own operating systems having adopted around the world.
A report back in 2012 claimed that both Android and iOS were growing 10 times faster than PCs did in the 1980s.
And it’s clear that iOS on the iPhone and iPad had blistering adoption rates (with one study, back in 2010, showing iPad had the fastest adoption rate ever).
Also, the adoption rates of iOS upgrades, such as iOS 7, tend to outpace Android.
But recent data from IDC and App Annie (December 2013) show Android, for example, with a big lead over Apple in the installed base of smartphones (see chart at bottom), while Apple leads in game monetization.
And there are plenty of other studies too — usually focusing on smartphones — that show Android leading.
The success of apps on iOS, however, has been a strong suit for Apple, as a recent Piper Jaffray study, released in January, shows.
In the same report, though, Piper Jaffray argued that the quality of apps on the two platforms is now equalizing and that services will now be the key differentiator.
The initial release of Android was in September 2008. iOS made its debut in June 2007.
So, is Google right? Maybe that’s best left to readers to debate.
“How do I know that the new installed app behaves as described?” asks Andreas Zeller, professor of software engineering at Saarland University. So far experts have identified so-called malicious apps by checking their behavior against patterns of known attacks. “But what if the attack is brand-new?” asks Zeller.
His group seems to have found a new method to answer these questions. Zeller summarizes the basic idea as follows: “Apps whose functionality is described in the app store should behave accordingly. If that is not the case, they are suspect.”
His research group has named the software based on this idea “Chabada”. For every app, it analyzes the description of its functionality that can be read in the app store. With methods from natural language processing, it identifies the main topics, for example “music”. After that, Chabada clusters applications by related topics. For instance, the cluster “travel” consists of all apps that deal with traveling in some way. Using program analysis, Chabada detects which data and services are accessed by the apps. Travel apps normally access the current location and a server to load a map. So a travel app secretly sending text messages is suspicious.
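Conceptually, the pipeline is simple enough to sketch in a few lines. The following is only a rough illustration of the idea under my own assumptions, not the researchers’ implementation; the function names, thresholds, and use of scikit-learn for topic modeling and clustering are all hypothetical.

```python
# Minimal sketch of the Chabada idea (illustrative only, not the published tool):
# 1) topic-model the store descriptions, 2) cluster apps by topic mixture,
# 3) flag apps whose API usage is unusual within their cluster.
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

def find_outliers(descriptions, api_sets, n_topics=30, n_clusters=30, rarity=0.05):
    # Extract topics ("music", "travel", ...) from the app-store descriptions.
    counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
    topics = LatentDirichletAllocation(n_components=n_topics).fit_transform(counts)

    # Group apps that talk about similar topics.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(topics)

    suspicious = []
    for cluster in set(labels):
        members = [i for i, label in enumerate(labels) if label == cluster]
        # How common is each API or permission among apps in this cluster?
        usage = Counter(api for i in members for api in api_sets[i])
        for i in members:
            rare = {a for a in api_sets[i] if usage[a] / len(members) < rarity}
            if rare:  # e.g. a "travel" app that also sends SMS
                suspicious.append((i, rare))
    return suspicious
```

Fed with the store descriptions and each app’s statically extracted API usage, a script along these lines would surface the SMS-sending travel app as an outlier within its cluster.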
The researchers applied this approach to 22,521 apps from the Google Play Store. Using a purpose-built script, they had downloaded the 150 most popular apps in each of the 30 categories on Google Play during the spring and winter of last year. Chabada then analyzed them. Finally, the computer scientists from Saarbruecken investigated the 160 most significant outliers to verify Chabada’s selection. The result: it detected 56 percent of the existing spy apps without knowing their behavior patterns beforehand.
The importance of the researchers’ work is underscored by a news item published by the Russian software company Doctor Web at the end of June last year. The company reported that it had discovered various malicious apps on the Google Play platform. Once downloaded onto a smartphone, the malware installed other programs, which secretly sent text messages to expensive premium services. Although Doctor Web, according to its own statement, informed Google immediately, the malicious apps remained available for download for several days. Doctor Web estimates that up to 25,000 smartphones were used fraudulently in this way. “In the future Chabada could serve as a kind of gatekeeper, ensuring that malicious apps will never make it into an app store”, Zeller explains.
The computer scientists from Saarbruecken will present their new approach at the International Conference on Software Engineering (ICSE) in Hyderabad, India, at the end of May. As early as March, Google security researchers will meet with the Saarbruecken team, and Google has already invited Zeller and his colleagues to have Chabada analyze the whole Google Play Store.
When you use both Microsoft Outlook 2013 SP1 and Microsoft Exchange Server 2013 SP1, one of the more interesting features you can take advantage of is MAPI over HTTP (code-named Alchemy). MAPI over HTTP lets Messaging API (MAPI) clients and servers communicate across an HTTP connection without using remote procedure calls (RPCs). Ever since its debut back in 1996, Exchange and its clients have used RPCs. The elimination of this now-aged mechanism marks the conclusion of a modernization process that began more than a decade ago.
The Roots of RPC in Exchange
RPC has a long and noble history. In the early 1990s, Microsoft built on the work of the Open Software Foundation (OSF) to create RPC as an interprocess communications (IPC) mechanism that underpinned client-server applications such as Outlook and Exchange. Applications that use RPCs don’t have to worry about the details of communication across local and remote networks through different protocols because the RPC layer is responsible for this activity.
In effect, RPCs allow applications to get on with the task of providing their unique functionality instead of having to constantly reinvent the networking wheel, even if occasionally you need to mess around with settings to make everything work. (The method described in “XGEN: Changing the RPC Binding Order” might bring back memories—or nightmares—about some of the fine-tuning required in early Exchange deployments.) In early Exchange deployments, the RPCs connecting Exchange and its clients traveled across many different protocols, including TCP/IP, NetBIOS, and named pipes. Over time, the focus shifted to TCP/IP, which became the de facto standard for Exchange 2000 and later.
The Problems with RPC
Although RPC delivers significant advantages to application developers, it’s an old mechanism that was originally designed to work across LANs rather than across the Internet. The age and relative lack of recent development in the RPC mechanism is reflected in much of its documentation, such as the “How RPC Works” article, which is based on Windows Server 2003 and last updated in March 2003.
Today, more and more of our communications flow over the Internet. The trend to use cloud-based services is just one influence that has driven traffic to the Internet. Mobility is another important influence, as people increasingly take advantage of sophisticated mobile devices and high-speed wireless networks at work and at home. Both influences cause problems for applications that use RPCs.
The first problem is that RPC communications are sensitive to disruptions due to network hiccups (a well-known feature of the Internet). RPCs use fixed buffer sizes, which means that an application like Outlook might have to make multiple calls to a server to retrieve or send information. This situation isn’t improved by the fact that MAPI is a verbose protocol, where the transmission of messages from client to server involves a mass of properties and values.
Outlook does its best to insulate users from network glitches. The introduction of the cached Exchange mode, drizzle mode synchronization, and various optimizations to minimize network consumption in Outlook 2003 provided users with a reliable and robust solution for working when networks weren’t dependable. At that time, the problem was more with dial-up telephone connections than Wi-Fi, but the foundation was set and Outlook has built on it ever since. It’s doubtful that Microsoft could’ve been so successful with Office 365 if Outlook didn’t work so well across the Internet.
However, things are a bit uglier behind the scenes. If network glitches occur, Outlook can end up in a cycle of constant retries to perform common operations such as downloading new items. And the sensitive nature of RPCs means that Outlook often has to restart activities because of a disruption, such as moving between two wireless access points. The chatty nature of RPCs, the increasing size of messages, and large attachments (e.g., digital photos, music files, Microsoft PowerPoint presentations) all make for more extended connections. The extended connections, in turn, expose those communications to more disruption. In the end, Outlook consumes many bytes on the wire just to get email updates done.
Microsoft introduced RPC over HTTP connections—aka Outlook Anywhere—to help with this situation. It was first used by Exchange Server 2003 to allow Outlook 2003 to connect to mailboxes without creating a VPN. However, it’s also a fairly old technique that hasn’t really changed much since its introduction in Windows 2000. Plus, Outlook Anywhere poses some unique challenges of its own. For example, HTTP is a half-duplex connection—in other words, it can carry traffic in a single direction. RPCs need full-duplex connections with synchronized inbound (RPC_IN_DATA) and outbound (RPC_OUT_DATA) links. Outlook Anywhere solves the problem by using two HTTP connections to carry the RPC traffic and session affinity to keep the links synchronized with each other. This arrangement is well known to administrators who set up load balancers to deal with Outlook Anywhere connections. Microsoft eased the problem somewhat in Exchange 2013 by moving the responsibility for handling session affinity to the Client Access Server, but the two connections are still used.
Microsoft has a lot of experience with other clients that use HTTP without RPCs. Both Exchange ActiveSync (EAS) and Outlook Web App (OWA) clients use longstanding HTTP connections to communicate with Exchange. These connections don’t require the complicated handshaking used when RPC connections are made, which means that they work much better than Outlook over low-quality links. Mobile devices in particular tend to hop between networks all the time, which creates a challenge in terms of maintaining connectivity with a server. EAS and OWA are both able to manage this kind of environment better than Outlook, which continually loses and restores connections at the expense of a great deal of network activity, most of which is hidden from end users by the cached Exchange mode.
Similar to EAS and OWA, MAPI over HTTP uses a kind of “hanging GET” transaction for Outlook notifications. Simply put, the client establishes a long-lived HTTP GET to the server so that the server knows the client is interested in notifications such as the arrival of new mail. If something happens on the server, the transaction is closed and the client issues a command to fetch the new data. If nothing happens during the GET interval (e.g., 20 minutes), the transaction is closed and a new HTTP GET is established.
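In other words, the “hanging GET” is plain long polling. The sketch below is a generic illustration of that pattern, not the actual MAPI over HTTP wire protocol; the URL, timeout, and response handling are placeholders of my own.

```python
import requests  # generic HTTP client, used purely for illustration

def wait_for_notification(session: requests.Session, url: str, interval_minutes: int = 20) -> bytes:
    """Hold a GET open until the server reports news, then return its payload."""
    while True:
        try:
            # The server keeps this request open until something happens
            # or the interval expires, whichever comes first.
            resp = session.get(url, timeout=interval_minutes * 60 + 30)
        except requests.RequestException:
            continue  # network hiccup: simply issue a fresh hanging GET
        if resp.ok and resp.content:
            return resp.content  # notification arrived; the caller now fetches the new items
        # interval expired with nothing to report: re-establish the GET and keep waiting
```

Because each request is self-contained, a dropped connection costs only a retry rather than the elaborate re-synchronization an RPC session requires.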
Outlook is a more complicated and sophisticated client than an OWA or EAS client (e.g., Mobile Outlook on Windows Phone). As such, you’d expect that Outlook consumes more bandwidth and executes more transactions with Exchange and other servers. For instance, Outlook 2013 fetches social media information from Facebook and LinkedIn, whereas OWA and EAS don’t. More important, when Outlook synchronizes with a mailbox, it downloads full-fidelity copies of items and attachments and usually processes every folder in the mailbox. Some control over the amount of data downloaded is available by restricting Outlook 2013 to downloading a certain window of data. However, Outlook typically performs a complete synchronization, whereas OWA or EAS typically concentrate on just a few folders and perhaps just the last few weeks of information from those folders. In addition, OWA and EAS don’t download attachments until requested by the user.
All this is of little importance when Outlook connects using a high-quality, fast network. It becomes very important when the network experiences frequent drops and reconnects, or when the bandwidth proves insufficient. At this point, an OWA or EAS client becomes a lot more usable in terms of getting work done. I discovered this in a very real sense when travelling in Australia last year, where some of the public Wi-Fi networks are less than good. OWA or EAS worked fine. As for Outlook, let’s just say that I used OWA far more than usual during that trip.
The complexity of troubleshooting Outlook connections with Exchange is also cited as a reason for change. Once multiple layers are involved, it becomes more difficult to trace connections and determine the root cause. Problems with Outlook connecting to Exchange (both on-premises and to Office 365) have been at the top of the support issues list for years, and the developers hope that simplifying the communications layer will make support easier. Tools such as the Microsoft Exchange RPC Extractor are available to help parse network captures and interpret the contents of RPCs, but the output can be difficult to understand. (The TechNet Magazine article “How IT Works: Troubleshooting RPC Errors” provides some useful background.) The hope is that it will be easier to make sense of the HTTP traffic in conjunction with other data (e.g., IIS logs) to understand the flow of client connections.
So, in a nutshell, RPC is:
- An aging communication mechanism, originally designed for LAN connections, that has been stretched to cover the very different demands of the Internet
- A mechanism that’s delivered to end users through clients that do their best to disguise and hide the underlying problems
- A mechanism that’s difficult to debug and support
This sounds like an opportunity for improvement, which is exactly what the MAPI over HTTP initiative seeks to deliver.
MAPI over HTTP
Although not much has happened in the RPC world for the last decade, the same isn’t true for HTTP. It has received a huge amount of development attention from leaders in the industry, including Microsoft. Apart from bug fixes, not much can be expected for RPC in the future, so if you were a development group, would you put your proverbial eggs in the HTTP or RPC basket? Aside from the inevitable problems involved in making a change to the way products like Outlook and Exchange work, the choice seems pretty obvious if you want to take advantage of new techniques and features that are likely to come along with the HTTP protocol over the next few years.
Figure 1 shows the two modes of communication that Outlook 2013 SP1 can use with Exchange 2013 SP1. On the left, you have the older RPC-style connections over either TCP/IP (for Exchange Server 2010 and earlier) or HTTP Secure (HTTPS, for Exchange 2013). In this instance, RPCs form the middle ground that links the client and server. However, that middle ground creates an extra layer of complexity if a need arises to debug connections. On the right, you can see how MAPI over HTTP removes the dependency on RPC by directing the remote operations executed by Outlook across HTTP connections. This is different from RPC over HTTP because the RPCs that previously carried the MAPI instructions (wrapped in TCP/IP or HTTP packets) are no longer used. Instead, the MAPI instructions are sent directly over an HTTP link, which is what most Internet applications use to convey their data.
A change like this involves a great deal of work on both the client and server, which is the reason why it only works when Outlook 2013 SP1 and Exchange 2013 SP1 work together. Remember that RPC is designed to relieve applications from the need to worry about IPC by providing a library of common functions that the applications can use to make connections. For backward compatibility with older clients and servers, Outlook 2013 SP1 and Exchange 2013 SP1 still contain all the RPC code, but they also have a new communications library that can be used to direct connections across an HTTP link.
When Outlook 2013 SP1 connects to an Exchange 2013 SP1 server, it advertises its ability to use MAPI over HTTP by sending an indicator in its initial connection. If MAPI over HTTP is enabled for the organization, Exchange responds by providing a set of URLs in the Autodiscover XML manifest returned to Outlook. The URLs point to the connection points (an IIS virtual directory for MAPI over HTTP) that Outlook can then use to establish the HTTP connection. If Outlook connects to Office 365, it receives only a set of external connection URLs, whereas on-premises Exchange servers (in a hybrid or pure on-premises configuration) transmit both internal and external connection points. Outlook attempts to use the internal connection first, then fails over to the external connection if necessary. Each set contains URLs for the mail store (mailbox databases) and directory, corresponding to the emsmdb and nspi interfaces used by Outlook to access mailboxes and the address book. (The Exchange Team Blog “Understanding how Outlook, CDO, MAPI, and Providers work together” provides some useful information about these interfaces.)
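The internal-then-external failover that Outlook performs can be pictured with a short sketch. Everything here is illustrative: the URLs, the dictionary shape, and the reachability probe are assumptions of mine, not the actual Autodiscover schema or Outlook’s real logic.

```python
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

def pick_mapi_endpoint(endpoints: dict) -> str:
    """Prefer the internal MAPI over HTTP URL, fall back to the external one.

    `endpoints` mimics what Autodiscover might hand back, e.g.
    {"internal": "https://mail.contoso.local/mapi/emsmdb",   # hypothetical URL
     "external": "https://mail.contoso.com/mapi/emsmdb"}     # hypothetical URL
    (An Office 365 mailbox would supply only the external entry.)
    """
    for kind in ("internal", "external"):
        url = endpoints.get(kind)
        if not url:
            continue
        try:
            urlopen(url, timeout=5)  # crude reachability probe, for illustration only
            return url
        except HTTPError:
            return url   # the server answered (e.g., a 401 challenge), so it is reachable
        except URLError:
            continue     # no answer at all: try the next candidate
    raise RuntimeError("no MAPI over HTTP endpoint reachable")
```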
Exchange 2013 supports Outlook 2010 and Outlook 2007 clients, too. Right now, it’s unclear whether the MAPI over HTTP update will be backported to these clients, so for now, they’ll continue to use RPC over HTTP. Customer demand and business requirements might lead Microsoft to upgrade Outlook 2010. It’s much less likely that Microsoft will invest the engineering effort to upgrade Outlook 2007 given the age of the client. As future clients and servers are delivered over time, Microsoft will remove support for RPC over HTTP in the same way it removed support for RPC over TCP/IP (in Exchange 2013) and UDP (initially in Exchange 2010 and finally in Exchange 2013).
Bumps Along the Road
It all sounds simple, but like many other changes in technology, using MAPI over HTTP requires some planning, testing, and other work on the part of administrators. First, MAPI over HTTP isn’t enabled by default. You have to enable it by updating the organization configuration to let mailbox servers know that it’s OK to tell Outlook clients that MAPI over HTTP is available.
As shown in Figure 2, this is a one-time operation performed by running a Windows PowerShell command against the organization configuration (the Set-OrganizationConfig cmdlet with its MapiHttpEnabled parameter set to $true).
This shouldn’t be done until the entire organization is upgraded to Exchange 2013 SP1 and you have deployed Outlook 2013 SP1 clients. Office 365 tenants won’t get to vote on when this update happens, as the responsibility for making the change lies in the hands of Microsoft.
Second, switching network communication mechanisms doesn’t come without some pain for clients. Outlook maintains information about its configuration in MAPI profiles. The change to MAPI over HTTP requires that the profiles be refreshed with the new information. Afterward, Outlook has to switch to the new MAPI over HTTP endpoints. The switch can’t be done on the fly, so Outlook has to exit and restart. Users will therefore see the infamous message: The Microsoft Exchange administrator has made a change on the server that requires you to quit and restart Outlook. After Outlook is restarted, it connects to MAPI over HTTP and everything starts to flow as planned.
A side effect of the switchover to MAPI over HTTP is that the Outlook profile page that controls the Outlook Anywhere settings, which Figure 3 shows, will no longer be visible. This is logical because Outlook Anywhere is no longer in use, but you might have to update end-user documentation to reflect the new reality.
If all the servers in the organization run Exchange 2013 SP1 (or later), the change to MAPI profiles should be a one-off affair. However, if you maintain some down-level servers (earlier versions of Exchange 2013, Exchange 2010, or Exchange 2007) and move mailboxes from Exchange 2013 SP1 to those servers, the MAPI profiles for those mailboxes will need a further refresh.
Users might or might not be concerned about having to restart Outlook. If users accept this kind of thing, they’ll restart Outlook as directed. However, if users are worried about it, they might create a Help desk support ticket. Given that the switchover is a one-time organization-wide event, you might have a situation in which hundreds of users return to work after a blissful weekend, resume their PCs from hibernation, see the message, and call the Help desk. Some careful communication and guidance to users is required here.
Third, no real information is available yet about the possible impact on performance (client or server) or network bandwidth. Although RPCs can be fragile, they’re compact. Pure HTTP traffic is likely to generate more bytes on the wire. For instance, the Exchange developers have decided to use HTTP headers to carry diagnostic information in MAPI over HTTP connections. This information is useful in debugging connectivity problems that might arise, but the additional traffic might be a problem for companies that depend on remote connections, such as those with highly mobile users or Office 365 tenants. Microsoft is likely to release more data about bandwidth requirements that compares MAPI over HTTP with RPC over HTTP. This data (or your own data) should be used to determine whether your network connections’ capacity needs to be upgraded. In addition, if you use appliances (e.g., load balancers, WAN accelerators), you should check with the vendors to determine whether any configuration changes should be made to optimize for MAPI over HTTP traffic.
Finally, there’s always the potential for edge cases that aren’t tested during development to pop up soon after an update is shipped to customers. Thinking about what might happen here, you need to check if you have any particular aspect of your environment that Microsoft is unlikely to test. For example, you might have a particular Outlook add-on that might cause problems after the switchover. Given that add-ons depend on Outlook networking, I think that this is an unlikely scenario, but it’s wise to test the entire configuration of your operational environment as thoroughly as possible before enabling such a profound change.
Microsoft believes that the transition to MAPI over HTTP will help Outlook improve its ability to work in a world where mobile communications are an absolute requirement. It makes sense for technology to adapt and embrace new conditions rather than look back to a time when the only available networks were dial-up telephones and safe corporate Ethernets. RPCs have had their day. They’ll remain in use for as long as old clients exist (and are supported), but the writing is firmly on the wall. HTTP-based connections are the lingua franca of the Internet and that’s what Outlook will use in future.
The Right Approach
Installing Exchange 2013 SP1 across an organization won’t force you to switch to MAPI over HTTP. For now, you can continue to use RPC over HTTP until you are ready to switch, most likely sometime in the future when you’ve upgraded all your desktops to Outlook 2013 SP1. It’s possible that there will be a time when using MAPI over HTTP is mandatory, but that’s likely to be well in the future when new versions of Exchange and Outlook are built to use only this mechanism.
I see lots of good things about this transition. It doesn’t seem to make a lot of sense to keep old networking and communication technologies in place when they struggle to deal with modern operating conditions. Developing a new approach that’s designed to cope well with the kind of connections being used today seems like a good idea. Although you can expect some bumps along the road until the inevitable configuration, operational, and programming problems are solved, I hope that the implementation occurs without undue disruption to end users. When it comes to technical evolution, good planning, comprehensive testing, and solid execution seem like the right approach to take.
Imagination Technologies is probably best known for its PowerVR graphics processors featured in Apple’s wildly popular iOS mobile devices, as PowerVR chips have powered every Apple mobile device since the third-generation iPhone 3GS.
This week, the company debuted the PowerVR GX6650, a 192-core GPU, at the 2014 Mobile World Congress (MWC) in Barcelona, Spain.
The new mobile GPU design will be embedded in upcoming mobile processors with integrated graphics, and the 192-core GPU is likely to turn up in next-gen Apple products in 2014 and beyond.
“Apple’s custom A-series chips that power the iPhone and iPad use PowerVR graphics processors,” AppleInsider staff explained. ”Apple’s latest flagship silicon, the A7 CPU found in the iPad Air and iPhone 5s, uses Imagination’s PowerVR Series 6 graphics.”
According to AppleInsider, Imagination has been talking up the new PowerVR GX6650 as the most powerful GPU IP core available on the market today, besting even Nvidia’s upcoming Tegra K1 platform. The new high-end mobile graphics design is targeted at processors for high-end, high-resolution tablets and 4K smart TVs.
The PowerVR GX6650 boasts 6 unified shading clusters and 192 cores, allowing it to process 12 pixels per clock.
Not surprisingly, power consumption on the latest PowerVR has been kept low, even with the additional performance boost. Power sipping is managed by the PowerGearing G6XT included in the GPU, while the PVR3C is tasked with optimally compressing textures, frame buffers and geometry.
The study was published the week of February 10–14 in the online edition of the Proceedings of the National Academy of Sciences. The work is the result of a five-year effort by researchers in the laboratory of Amnon Yariv, Martin and Eileen Summerfield Professor of Applied Physics and professor of electrical engineering; the project was led by postdoctoral scholar Christos Santis (PhD ’13) and graduate student Scott Steger.
Light is capable of carrying vast amounts of information—approximately 10,000 times more bandwidth than microwaves, the earlier carrier of long-distance communications. But to utilize this potential, the laser light needs to be as spectrally pure—as close to a single frequency—as possible. The purer the tone, the more information it can carry, and for decades researchers have been trying to develop a laser that comes as close as possible to emitting just one frequency.
Today’s worldwide optical-fiber network is still powered by a laser known as the distributed-feedback semiconductor (S-DFB) laser, developed in the mid 1970s in Yariv’s research group. The S-DFB laser’s unusual longevity in optical communications stemmed from its, at the time, unparalleled spectral purity—the degree to which the light emitted matched a single frequency. The laser’s increased spectral purity directly translated into a larger information bandwidth of the laser beam and longer possible transmission distances in the optical fiber—with the result that more information could be carried farther and faster than ever before.
At the time, this unprecedented spectral purity was a direct consequence of the incorporation of a nanoscale corrugation within the multilayered structure of the laser. The washboard-like surface acted as a sort of internal filter, discriminating against spurious “noisy” waves contaminating the ideal wave frequency. Although the old S-DFB laser had a successful 40-year run in optical communications—and was cited as the main reason for Yariv receiving the 2010 National Medal of Science—the spectral purity, or coherence, of the laser no longer satisfies the ever-increasing demand for bandwidth.
“What became the prime motivator for our project was that the present-day laser designs—even our S-DFB laser—have an internal architecture which is unfavorable for high spectral-purity operation. This is because they allow a large and theoretically unavoidable optical noise to comingle with the coherent laser and thus degrade its spectral purity,” he says.
The old S-DFB laser consists of continuous crystalline layers of materials called III-V semiconductors—typically gallium arsenide and indium phosphide—that convert into light the applied electrical current flowing through the structure. Once generated, the light is stored within the same material. Since III-V semiconductors are also strong light absorbers—and this absorption leads to a degradation of spectral purity—the researchers sought a different solution for the new laser.
The high-coherence new laser still converts current to light using the III-V material, but in a fundamental departure from the S-DFB laser, it stores the light in a layer of silicon, which does not absorb light. Spatial patterning of this silicon layer—a variant of the corrugated surface of the S-DFB laser—causes the silicon to act as a light concentrator, pulling the newly generated light away from the light-absorbing III-V material and into the near absorption-free silicon.
This newly achieved high spectral purity—a 20 times narrower range of frequencies than possible with the S-DFB laser—could be especially important for the future of fiber-optic communications. Originally, laser beams in optic fibers carried information in pulses of light; data signals were impressed on the beam by rapidly turning the laser on and off, and the resulting light pulses were carried through the optic fibers. However, to meet the increasing demand for bandwidth, communications system engineers are now adopting a new method of impressing the data on laser beams that no longer requires this “on-off” technique. This method is called coherent phase communication.
In coherent phase communications, the data resides in small delays in the arrival time of the waves; the delays—a tiny fraction (10⁻¹⁶) of a second in duration—can then accurately relay the information even over thousands of miles. The digital electronic bits carrying video, data, or other information are converted at the laser into these small delays in the otherwise rock-steady light wave. But the number of possible delays, and thus the data-carrying capacity of the channel, is fundamentally limited by the degree of spectral purity of the laser beam. This purity can never be absolute—a limitation of the laws of physics—but with the new laser, Yariv and his team have tried to come as close to absolute purity as is possible.
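To get a feel for how demanding that is, a rough back-of-the-envelope calculation (assuming a standard 1550 nm telecom carrier, a figure not given in the article) shows that a 10⁻¹⁶-second delay is only a small fraction of one optical cycle:

```latex
f = \frac{c}{\lambda} \approx \frac{3\times10^{8}\ \mathrm{m/s}}{1.55\times10^{-6}\ \mathrm{m}}
  \approx 1.9\times10^{14}\ \mathrm{Hz},
\qquad
T = \frac{1}{f} \approx 5.2\times10^{-15}\ \mathrm{s},
\qquad
\frac{10^{-16}\ \mathrm{s}}{T} \approx 0.02\ \text{cycle}
```

Resolving shifts of a couple of percent of a cycle is only possible if the carrier’s own phase wanders even less, which is exactly what higher spectral purity buys.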
These findings were published in a paper titled, “High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms.” In addition to Yariv, Santis, and Steger, other Caltech coauthors include graduate student Yaakov Vilenchik, and former graduate student Arseny Vasilyev (PhD, ’13). The work was funded by the Army Research Office, the National Science Foundation, and the Defense Advanced Research Projects Agency. The lasers were fabricated at the Kavli Nanoscience Institute at Caltech.
With 450 million monthly users and a million more signing up each day, WhatsApp was just too far ahead in the international mobile messaging race for Facebook to catch up, as you can see in the chart above that we made last year. Facebook either had to surrender the linchpin to mobile social networking abroad, or pony up and acquire WhatsApp before it got any bigger. It chose the latter.
Facebook said on its earnings call a few weeks ago that its November relaunch of Messenger led to a 70 percent increase in usage, with many more messages being sent. But much of that was likely in the United States and Canada, where the standalone messaging app war is still to be won.
Internationally, Facebook was late to the messaging party. Messenger didn’t launch until 2011, after Facebook bought Beluga, and at the time it was centered on group messaging, where SMS was especially weak.
WhatsApp launched in 2009 with the right focus on a lean, clean, and fast mobile messaging app. And while the international messaging market is incredibly fragmented, it was able to gain a major presence where Messenger didn’t as you can also see in the chart above.
Unlike PC-based social networking, there is no outstanding market leader in mobile messaging. Still, WhatsApp absolutely dominates in markets outside of the U.S. like Europe and India.
[Update: WhatsApp was much more popular than Facebook in several large developing markets, according to data from a small survey conducted by Jana Mobile and published by The Information (paywalled). In India, Brazil, and Mexico, respondents were 12X to 64X more likely to name WhatsApp as their most used messaging app, compared to Facebook. Those are big countries with tons of users that Facebook needs.]
It’s also impossible for Facebook to acquire certain other Asian competitors like WeChat, which is the one hope of Chinese mega-giant Tencent to have a global consumer product.
So it’s clear that WhatsApp was of strategic interest to Facebook, and we know that the two talked from time to time.
We made the map above using data from Onavo, an Israel-based company that Facebook acquired for — ahem — competitive intelligence. Because Facebook scooped up Onavo for more than $100 million in October, we don’t have access to active usage data anymore. The only thing outsiders can see is app store rankings, which imply download rates, not current usage.
So what happened in the last year? WhatsApp looks to have pulled so far ahead of Facebook in developing markets that there was no way to catch up. Mark Zuckerberg said in a post today that the app was on its way to reaching 1 billion users.
We’ve heard Facebook has been interested in buying WhatsApp for two to three years. We reported in 2012 that Facebook was in talks to acquire WhatsApp. But over the past year, it became clear that Facebook couldn’t afford not to pay whatever it would take to get WhatsApp on its team.
So the answer to Facebook’s problem ended up being $19 billion.
Apparently, that’s what it took to take Jan Koum and his backers at Sequoia Capital (the fund that Zuck originally spited) out of the market. If Facebook had waited any longer, that number probably would have just gotten bigger.
You might wonder how WhatsApp will ever earn back the money it cost to buy, but this acquisition wasn’t about increasing Facebook’s total revenue. It was about surviving the global shift to mobile.
Facebook has made another surprising acquisition. After swallowing up Instagram, the social networking giant has now bought WhatsApp for USD 19 billion, or around Rp 209 trillion (at USD 1 = Rp 11,000).
As reported by the Wall Street Journal, that USD 19 billion is, of course, not all in cash. It consists of USD 12 billion in Facebook shares, USD 4 billion in cash, and a further USD 3 billion in restricted stock granted to WhatsApp’s founders and a number of its employees.
WhatsApp, the cross-platform messaging app, currently serves 450 million users every month, around 70 percent of whom count as active users.
Counted per day, the WhatsApp service reportedly gains more than 1 million new sign-ups.
Despite the acquisition, Facebook has promised that WhatsApp will keep operating independently and continue using the brand it has built.
Meanwhile, WhatsApp CEO and founder Jan Koum will join Facebook’s board of directors.
With WhatsApp in hand, Facebook has of course grown even more gigantic. The site founded by Mark Zuckerberg had previously acquired Instagram for USD 1 billion in 2012.
Startling news has been revealed by Edward Snowden, the former contractor for the United States (US) National Security Agency (NSA). According to Snowden, Australian intelligence made use of Indonesia’s two largest mobile phone operators to smooth the way for surveillance carried out by Australia and the US.
As reported by The New York Times on Saturday (February 15, 2014), the man who has repeatedly leaked US intelligence secrets explained that US intelligence agencies were also involved in the surveillance conducted by the Australian government. He also said that, in tapping communications in Indonesia, the Australian and US governments involved Indonesia’s two largest mobile operators, namely the operator whose corporate color is red and the one whose corporate color is yellow.
The New York Times, as quoted by the Sydney Morning Herald and the Guardian, reported that Snowden’s latest data indicates the two largest mobile phone operators in Indonesia were enlisted to collect the data the agencies wanted. Australia’s biggest targets in the surveillance were prominent Indonesian figures and active terrorism suspects.
The prestigious US, Australian, and British outlets disclosed details of how the Australian Signals Directorate (ASD) made offers to the US surveillance agency and a US law firm in the wiretapping scandal. The documents show the cooperation between the NSA and the ASD and reveal, for the first time, comprehensive access to Indonesia’s national communications systems.
According to a 2012 NSA document, the ASD had managed to access call data from Indosat, Indonesia’s domestic satellite communications operator. The intercepted data included data on Indonesian officials in various government departments.
Going even further, a document from 2013 states that the ASD had obtained nearly 1.8 million master encryption keys used to protect private communications on the Telkomsel network, and had developed a way to decrypt virtually all of them.
According to the leaks, Australian intelligence has been spying on Indonesia since the 2002 Bali bombings, which killed 202 people, including 88 Australians. Besides Indonesia, the surveillance also targeted several other Asian countries, including China.
[New York Times]
If Gartner’s recent poll of NoSQL database adopters is any indication, traditional IT is dead. Not just a little bit dead. Dead dead.
According to the Gartner poll, a scant 5.5% of NoSQL users identified themselves as DBAs running those storage systems for their businesses. The survey was small, but it might point to a larger trend: do-it-yourself (DIY) IT, or DevOps.
DevOps is sometimes characterized as developers reigning over operations, but that’s not really the case. Rather, as Mike Loukides suggests, “Operations doesn’t go away, it becomes part of the development.” Application developers, increasingly running in cloud environments, take on more traditional operations responsibilities with Ops becoming part of the application.
It’s catching on. As Microsoft’s Tim Park declares, DevOps is “the new normal” given that “infrastructure is too complex now to manage with humans,” requiring “automation of everything.”
The numbers agree. DevOps adoption has increased 26% since 2011, according to a 2013 survey by Puppet Labs, and the rise in DevOps also translates into the ability to ship code 30X faster. All of this is echoed in a separate CA Technologies survey of senior IT decision-makers, which found that improvements to the customer experience are by far the biggest reason enterprises are embracing DevOps.
So what happens now to the traditional IT Operations professional?
Who Is Running This Stuff?
The answer is, of course, that it’s unclear. But looking at Gartner’s data, the numbers don’t bode well for traditional operations.
Commenting on the data, Gartner analyst Nick Heudecker notes: “DBAs simply aren’t a part of the NoSQL conversation. This means DBAs, intentionally or not, are being eliminated from a rapidly growing area of information management.” While Heudecker is talking specifically about DBAs and NoSQL databases, this same trend is playing out across the IT spectrum.
Developers are the new kingmakers, as Redmonk’s Stephen O’Grady reminds us. They do Ops differently.
Not that this is without problems. In my experience, developers are often unprepared or unwilling to take on the burden of managing their applications in production. Trained for years on the idea that they could build an application and dump it on Operations to manage, developers are discovering that the “Ops” in DevOps is real, and sometimes painful.
Heudecker captures this concern: ”Application developers may be getting what they want from NoSQL now, but cutting out the primary data stewards will result in long-term data quality and information governance challenges for the larger enterprise.”
Ops By Another Name
Such issues will need to be tackled by the rising generation of DevOps professionals. But let’s be clear: It’s too late to go back to the old way of managing IT. CSC’s Simon Wardley posits that we’re well into a “Next Generation” approach to IT, one that elevates developers and significantly changes the role of traditional IT Ops. Given the crushing need for development speed, there simply is no other way.
Many business owners and entrepreneurs struggle with whether they should design a responsive website that works across devices or focus exclusively on building a native mobile app.
It’s a difficult choice to make since both options present advantages and disadvantages that must be taken into consideration when moving forward.
As of last year, apps from retail businesses accounted for up to 27 percent of consumers’ time, which sheds light on how critical a mobile app can be to reaching your customers where they are active online. At the same time, 67 percent of consumers say they are more likely to purchase from a mobile-friendly website than from a website not optimized for devices other than desktop.
It’s a tough call to make when deciding between responsive design or an app, but in the end, it depends on the goals of your business.
If your company can afford it, it’s highly recommended that you build both a responsive site and a native mobile app to help capture the attention of your entire mobile audience. The native mobile app will provide a mobile-centric experience for your existing and most loyal customers, while your responsive website can provide an optimized experience for returning visitors and for those discovering your site for the very first time.
For example, popular ecommerce brand Nasty Gal has a responsive website and a mobile app to help provide the best experience for its shoppers however they wish to shop the brand’s products.
Most companies can’t afford to do both, which is why it’s important to understand the advantages of both options when addressing your company’s mobile priorities.
Responsive design isn’t a cure-all
Responsive Web design is certainly the most affordable option for your business as compared to the development of a mobile app. Take into consideration the initial costs of redesigning your website to be mobile friendly, then the cost of occasional upkeep and upgrades.
If visibility in the search engines is an increasingly important part of your strategy to grow your business, then a responsive website is critical in helping grow traffic to your website. A mobile app lives in a closed environment and cannot be indexed by the search engines, which requires driving traffic to this app through alternate methods.
Depending on your designer and the size of your website, a responsive Web design often takes far less time to create than a mobile app, since there’s no app store approval process or extensive guidelines to follow of the sort Google Play, the Apple App Store, and the Windows Phone Store require for launching an app.
If the goal of your destination online is to be universally accessible from any device, then responsive design is the solution. A mobile app is designed for a unique experience exclusive to the operating system it lives on, which means it isn’t a one-size-fits-all fix.
However, don’t think of responsive design as the easy way out when it comes to optimizing your website across mobile devices. Although a responsive website optimizes your experience, it doesn’t incorporate all the smartphone features, such as the camera or GPS, that a native mobile app can.
A mobile app will provide users with unique functionality and speed that can’t be achieved with a responsive website, but can be experienced on the operating system you choose to design your app for.
A responsive site is better than not having a mobile-friendly version of your website at all, but it’s not the final answer for your customers’ experience with your business on mobile. Again, the choice between responsive design and a mobile app depends on what your goals are for mobile.
Consult analytics to inform your native mobile app
A mobile app offers a compelling, unique and mobile specific experience for your customers, which is one of the main reasons why your company should consider designing an app over worrying about making your existing website mobile-friendly.
First and foremost, if you have existing data to analyze, then it is important to use analytics tools like Google Analytics or Omniture to see which mobile devices have been used most to visit your website over the past few months. This can help inform which operating system you decide to design your app for.
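As a concrete illustration, suppose you export a sessions-by-operating-system report from your analytics tool as a CSV; the file name and column header below are hypothetical, but a few lines are enough to see where your mobile traffic actually comes from:

```python
import csv
from collections import Counter

# Hypothetical export: one row per session, with an "operating_system" column.
share = Counter()
with open("mobile_sessions.csv", newline="") as f:
    for row in csv.DictReader(f):
        share[row["operating_system"]] += 1

total = sum(share.values())
for os_name, sessions in share.most_common():
    print(f"{os_name}: {sessions / total:.1%} of mobile sessions")
```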
Whether you decide to go with iOS, Android, Windows Phone, or another, less popular operating system, it’s essential to match the features of the operating system with the type of app you’re looking to create, whether that’s an ecommerce store, a content-focused site, or something else.
Besides being able to bring more of a mobile device’s features into the experience, a mobile app often has access to more data about a user and can therefore provide a more personalized experience.
This personalization through data could play out in the types of push notifications an app sends you, product recommendations, suggested content to view or other specific user-driven actions. When a user makes a profile on an app, it makes gathering data about a person and their online habits much easier for a business and much quicker and smoother for the user continually using this app to shop, find events to attend, listen to music and perform other tasks.
As of now, a native mobile app offers the best user experience for a person on a mobile device, since there are still limitations to how well HTML5 is handled on mobile.
The more complex a responsive website becomes, the more likely the user experience is to suffer. A native mobile app offers the best user experience to your audience, taking advantage of the phone’s functions and meeting the expectations of customers using these devices.
Lastly, in-app purchasing drives 76 percent of all app marketplace revenues to date; once it is set up, it’s particularly easy for users to make a purchase with pre-entered credit card information.
In-app purchasing is best suited to apps that offer micro-purchases, which are low-price-point products or services within the app, such as virtual goods, membership in the premium version of the app, or access to additional content.