While both Google and SAP shared a 1980s music sensibility at their respective conferences this week – Billy Idol performed at Google I/O and U2's Bono walked the floor at SAPPHIRE – the two companies see the future of computing very differently. Even where they agree on the importance of cloud computing, their strategies couldn't be more different.
For one thing, SAP’s new cloud isn’t even a cloud. But then, SAP’s Bono wasn’t really Bono, either, but merely an impersonator.
Forrester analyst Stefan Ried takes SAP to task for getting cloud wrong in its new HANA Enterprise Cloud:
“The Hana Cloud is a very careful move to a new business model. It is not disruptive and will NOT accelerate Hana usage to the many more customers who have been struggling with Hana on-premises because of its licensing.
“The announced Hana Enterprise Cloud follows the ‘Bring Your Own License’ paradigm. While this is great for customers that already have a Hana license and would like to relocate it into the cloud, it is useless for customers that might have largely fluctuating data volumes or user numbers and might specifically use a cloud because of its elastic business model.”
In other words, it’s not really a cloud.
Amazon, more than any other cloud vendor, has insisted that such “clouds” don’t deserve the name, as they fail to live up to the very premise of cloud computing: truly elastic, on-demand software. But while Amazon normally reserves its ire for private cloud vendors, SAP’s HANA Cloud is even less of a cloud because it requires you to bring your own HANA license to the party.
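Ried's elasticity argument can be made concrete with a toy cost model. All figures below are hypothetical – they are not SAP, AWS, or Google pricing – and the function names are invented for illustration:

```python
# Hypothetical figures for illustration only -- not real SAP or cloud pricing.
LICENSE_COST = 120_000   # up-front "bring your own license" cost
HOURLY_RATE = 5.00       # pay-as-you-go rate for an equivalent instance

def elastic_cost(hours_used):
    """Elastic model: pay only for the hours actually consumed."""
    return hours_used * HOURLY_RATE

def byol_cost(hours_used):
    """BYOL model: the license is a sunk cost regardless of usage."""
    return LICENSE_COST

# A seasonal workload running around the clock for one 90-day quarter:
seasonal_hours = 90 * 24
print(elastic_cost(seasonal_hours))  # 10800.0 -- far cheaper than the license
print(byol_cost(seasonal_hours))     # 120000

# A workload running 24/7 for three years finally favors the fixed license:
steady_hours = 3 * 365 * 24
print(elastic_cost(steady_hours))    # 131400.0
```

With fluctuating demand, the elastic model wins by an order of magnitude – which is exactly the customer Ried says a bring-your-own-license cloud leaves out.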
Meanwhile, over at Google I/O, Google introduced improvements to Google Cloud Platform and made Google Compute Engine available to all. Like Amazon, Google is making a powerful array of infrastructure technologies available on demand and fully elastic.
Google, like Amazon, realizes that the future of computing is not going to be won by the vendor with the prettiest device or even the best user interface: it will be won by the company with the best cloud services. As Redmonk analyst Stephen O’Grady pointed out, summarizing Google’s first day announcements:
“[Google is clearly telegraphing that] the war for mobile will not be won with devices or operating systems. It will be won instead with services.”
SAP must see this, too, but appears hamstrung by its past, in true “Innovator’s Dilemma” fashion. It has so much revenue tied up in legacy deployments of legacy software that even releasing a kind-of, sort-of, not-really cloud offering is the best it can do.
This is not to suggest that HANA is bad technology. By most accounts, it’s quite good. But as Ried argues, “The SAP Hana Enterprise Cloud is version 2 of the initial Hana in-memory database, but the cloud offering based on ‘Bring Your Own License’ is more version 0.1 of a cloud business model.”
Which is to say, it’s no cloud at all. While this may not seem like a big deal, enterprises are barreling into true clouds for a wide variety of needs, and no longer merely for development and test workloads. If SAP wants to participate in the future of enterprise computing, it should learn from the companies that are inventing that future: Google and Amazon.
The next “space race” might be the race to develop a synthetic model of the human brain – one that Google and Microsoft will participate in, if a report is true.
And instead of trying to beat the Russians, this time the Americans will be racing against the Europeans, who have already announced their plans.
The New York Times reported Monday that the Obama Administration is close to announcing the Brain Activity Map, which scientists quoted by the paper say could be on the scale of the Human Genome Project – the $3.8 billion effort to map the human genome that, the Times reported, returned $800 billion in jobs and other benefits.
The Brain Activity Map would attempt to document how the brain works, from individual neurons up through how the brain's different regions communicate with one another. If the project succeeds, the Brain Activity Map might give us an understanding of how the human brain “computes” data through its complex web of neurons. It might also help scientists tackle brain-related diseases like Alzheimer’s.
Modelling the human brain, and figuring out how it works, has long been one of the Holy Grails of supercomputing, prompting fears of a “technological singularity,” where successively advanced artificial intelligences design ever more refined versions of themselves, leading to a future where humans become increasingly irrelevant.
On a more realistic scale, learning how people think could allow services to begin anticipating their needs, a problem companies like Google and Microsoft would be interested in solving. The Times reported that a Jan. 17 meeting at CalTech was attended by the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency and National Science Foundation, plus Google, Microsoft, and Qualcomm.
Google representatives did not return an emailed request for comment, possibly because of the U.S. President’s Day holiday. A Microsoft Research representative said that the company declined to comment.
Two of the foundations of the Times report were public statements: a tweet by NIH director Francis S. Collins, and a mention of the efforts to map the brain by President Obama in his State of the Union address:
“Every dollar we invested to map the human genome returned $140 to our economy,” Obama said, according to a transcript of the speech. “Every dollar. Today, our scientists are mapping the human brain to unlock the answers to Alzheimer’s. We’re developing drugs to regenerate damaged organs, devising new materials to make batteries 10 times more powerful. Now is not the time to gut these job-creating investments in science and innovation. Now is the time to reach a level of research and development not seen since the height of the space race. We need to make those investments.”
Collins then tweeted: “Obama mentions the #NIH Brain Activity Map in #SOTU”.
The Other Horses in the Race: the EU
Funding for the U.S. effort could last as long as 10 years, and possibly top $3 billion over that time. But the bar was set earlier by a massive collaboration among more than 80 European research agencies, which won an award from the EU of one billion euros ($1.34 billion) to develop a computer simulation of the human brain, known as The Human Brain Project.
That will partly cover the intriguingly named “Neuropolis,” a building dedicated to “in silico life science” that will serve, at least in part, as the computer infrastructure behind the effort. The Swiss Confederation, the Rolex Group, and various third-party sponsors are backing this part of the effort.
The HBP will build new platforms for “neuromorphic computing” and “neurorobotics,” enabling researchers to develop “new computing systems and robots based on the architecture and circuitry of the brain,” according to The Human Brain Project.
Other Horses in the Race: IBM and DARPA’s SYNAPSE
The Defense Advanced Research Projects Agency, responsible for the initial funding and challenges behind self-driving cars and other public-private partnerships, has worked with IBM to develop SYNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), whose ultimate goal is to build a “cognitive computing architecture with 10^10 neurons and 10^14 synapses” – not a biologically realistic simulation of the human brain, but one where computation (“neurons”), memory (“synapses”), and communication (“axons,” “dendrites”) are mathematically abstracted away from biological detail.
A Network of Neurosynaptic Cores Derived from Long-distance Wiring in the Monkey Brain: Each brain-inspired region is symbolically represented by a picture of IBM’s SyNAPSE Phase 1 neuro-synaptic core. Arcs are colored gold to symbolize wiring on a chip. (Source: Dharmendra S Modha)
Using 96 Blue Gene/Q racks of the Lawrence Livermore National Laboratory’s Sequoia, the most powerful supercomputer in the world, the team simulated 2.084 billion neurosynaptic cores containing 53×10^10 neurons and 1.37×10^14 synapses, according to the blog of Dharmendra Modha, the leader of IBM’s Cognitive Computing division. The simulation ran only 1,542 times slower than real time.
IBM assembled its diagram of the interconnections inside the cerebral cortex of the macaque, a small monkey, as an early model of how the brain works.
IBM’s Watson, of course, is another example of how a computer can interact with humans, absorbing reams of unstructured data and winning Jeopardy, among other things.
Google itself last year set out to develop its own neural network, then fed it images from YouTube videos. The result, as was widely publicized, was that the network constructed an internal image of a cat, then spent its computational effort deciding which YouTube videos were and were not cats.
So how could Google or Microsoft benefit from a federal partnership? On the surface, they might receive federal funding for research. Cognitive computing on the order of what IBM is hoping to achieve, for example, can take millions and millions of dollars, even if the computing resources are already available. (The Times reported that the CalTech meeting was designed to determine if sufficient computing resources were indeed available; the answer is yes, the paper reported.)
Thinking the way humans think would allow Google or Microsoft to better anticipate what their users want, and to provide them with that data. Both companies can already do this to some extent through data accumulated from millions of users: if the most common “t” word I search for is Twitter.com, Google can start pre-loading the page in the background. But thinking like a human thinks, and making the seemingly random associations humans make thousands of times faster than we do, could mean everything from artificially crafted memes to pre-processed sound bites for politicians.
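The kind of prefetching described above is, at its simplest, frequency counting over a user's query history. A minimal sketch – the history and function name are made up for the example:

```python
from collections import Counter

def most_likely_completion(history, prefix):
    """Return the most frequent past query starting with `prefix`, or None."""
    matches = Counter(q for q in history if q.startswith(prefix))
    return matches.most_common(1)[0][0] if matches else None

history = ["twitter.com", "twitter.com", "techmeme.com", "twitter.com", "weather"]
print(most_likely_completion(history, "t"))  # twitter.com -- worth pre-loading
print(most_likely_completion(history, "z"))  # None -- nothing to prefetch
```

The leap the article imagines is from this kind of per-user statistics to associations no query log contains.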
On Friday (23/11/2010), Google Street View officially arrived in Indonesia. Collecting the data requires cars equipped with panoramic cameras. What do they look like?
“These cars are equipped with cameras to capture the data,” said Google Product Manager Andrew McGlinchey at the launch event for Google Maps Street View at Cow Feed, Epicentrum Walk, Rasuna Said, Jakarta.
Google Street View is not a real-time view but photographic data. The photos are gathered by staff who drive around the city collecting them. The images are panoramas with a 360-degree field of view.
Starting today, the cars are roaming Jakarta to collect data. They are quite distinctive, decked out in Google Street View livery. The SUVs and sedans are painted by Google in mostly white, yellow, green, and blue.
On top sits the key data-collection tool: the camera. Shaped like a tower topped with a sphere, the cameras capture an image every meter. There are many cameras inside the sphere. The images are later stitched together into a single panorama.
The cars, which have just departed from Epicentrum Walk, are scheduled to drive around Jakarta collecting photos. Other cities will follow.
Google makes more money from advertising than all U.S. print publications combined, according to a new study from German statistics company Statista. The company found Google generated $10.9 billion in ad revenue in the first six months of 2012, while the whole U.S. print media industry — newspapers and magazines — made only $10.5 billion.
Statista did note, however, that the comparison is “obviously unfair” and shouldn’t be judged scientifically: Google operates globally, while Statista only looked at print media in the U.S.
Still, it’s a pretty interesting indication of where the print industry is going. The rise of the Web and the fall of print have been well documented, but Statista’s chart makes this trend pretty evident. Several years ago, the print sector’s ad revenue dwarfed Google’s results. But print ad revenue has been falling pretty steadily since about 2006. Google, on the other hand, has seen the opposite effect over the past several years.
However, Google still faces some concerns of its own. The company last month reported somewhat lackluster results in its core business during the third quarter. Among the disappointments was a 15 percent year-over-year drop in advertising cost-per-click, the figure that measures the average amount advertisers paid Google for each time someone clicked on an ad. A 15 percent decline is pretty steep and is worse than analysts were expecting.
Google’s Gmail is top dog in e-mail, according to ComScore, which tracks Web site traffic.
In data for October released by ComScore, Gmail saw 287.9 million unique worldwide visitors during the period, edging out Microsoft’s Hotmail, which finished with 286.2 million unique visitors.
In third place was Yahoo, the once mighty e-mail power, with 281.7 million, according to ComScore. Yahoo, however, holds a comfortable lead in the United States with 76.7 million, compared to second-place Gmail with 69.1 million and third-place Hotmail with 35.5 million.
The data appears to end any dispute about Gmail’s dominance in the global market. Last summer, Google claimed to be the largest based on its own internal numbers but the company’s assertion wasn’t backed up by third-party data.
Google is rolling out ultra-high speed fiber in Kansas City. According to Mountain View rep Milo Medin, the service is 100 times faster than today’s average broadband.
“No more buffering. No more loading. No more waiting. Gigabit speeds will get rid of these pesky, archaic problems and open up new opportunities for the web,” Medin wrote in an official blog post.
“Imagine: instantaneous sharing; truly global education; medical appointments with 3D imaging; even new industries that we haven’t even dreamed of, powered by a gig.”
Google has divided Kansas City into small communities dubbed “fiberhoods.” To take advantage of the fiber rollout, residents are required to pre-register for the uber-fast Internet service. The fiberhoods with the highest pre-registration percentages will get Google Fiber first.
The cost? $70 per subscriber, per month. Those interested in snapping up a combined Internet + cable package will be charged $120 a month, while a slower Internet option with no monthly fee will be available for subscribers who pay a hefty $300 installation fee.
“[Clearly], the Internet is not as fast as it should be. While high speed technology exists, the average Internet speed in the U.S. is still only 5.8 megabits per second (Mbps)—slightly faster than the maximum speed available 16 years ago when residential broadband was first introduced,” Medin noted.
“Access speeds have simply not kept pace with the phenomenal increases in computing power and storage capacity that’s spurred innovation over the last decade, and that’s a challenge we’re excited to work on.”
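Medin's 5.8 Mbps figure makes the gap easy to quantify. A quick sketch of transfer times for a 1 GB file at the U.S. average versus a gigabit link (decimal units, ignoring protocol overhead; the function name is ours):

```python
def download_seconds(size_gb, speed_mbps):
    """Time to transfer a payload at a given line rate, ignoring overhead."""
    size_megabits = size_gb * 8 * 1000  # GB -> megabits, decimal units
    return size_megabits / speed_mbps

# 1 GB at the 5.8 Mbps U.S. average vs. a 1,000 Mbps (gigabit) fiber link:
print(round(download_seconds(1, 5.8)))  # 1379 seconds, about 23 minutes
print(download_seconds(1, 1000))        # 8.0 seconds
```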
Google is showcasing Jelly Bean, the latest iteration (4.1) of its Android operating system for tablets and smartphones, at I/O 2012 in San Francisco, California.
As expected, Jelly Bean includes a number of sweet new features including a slick redesigned keyboard, offline voice dictation, 18 new input languages, audio cues for blind users, support for Braille and optimized search integration.
Version 4.1 of Android also boasts critical NFC updates, including a new tap-to-pair feature for Bluetooth speakers and the ability to transmit photos by tapping. Plus, the revamped OS is now capable of automatically arranging onscreen icons around new widgets and apps.
And what about all those unwanted apps and widgets? Well, with Jelly Bean, they can be deleted with a single, easy swipe.
In addition to the above-mentioned features, Mountain View also confirmed the existence of Project Butter, an initiative to significantly improve Android’s performance. Under the auspices of Butter, the CPU and GPU run in parallel, achieving 60 FPS rendering with triple-buffered graphics.
Fortunately, we don’t have long to wait for Jelly Bean. An SDK for devs is available today, while an OTA update is slated to roll out for the Samsung Galaxy Nexus, Nexus S and Motorola Xoom in mid-July.
Offline Google Maps to work on ‘all devices with Android 2.2 or higher,’ 3D compatibility less clear
We’d heard earlier that Google had “nothing to announce” in regard to Android compatibility with the newly-announced offline Maps support and 3D modeling, but look — things change. We reached out to the company and urged ‘em to dig a little deeper, only to have the following confirmed: “For offline Google Maps for Android, all devices with Android 2.2 (Froyo) and above will be supported.” As for the 3D portion? “We’ll have more details about device compatibility for 3D imagery on Google Earth for mobile at launch.” After the event, we spotted a Googler using the 3D build on a Galaxy Nexus, so it’s obvious that Android 4.0+ will be supported, but we have to assume that some of these older Froyo devices may simply lack the proper oomph needed to fly around the downtowns of [insert major metropolitan area here].
Google just added a small but interesting feature to Google Analytics: you can now see how much of your site your visitors are really seeing, based on new browser-size analysis. Through Analytics, Google already knows what screen sizes your site’s visitors are using, so it is now combining this information with its previously released Browser Size tool from Google Labs. Google is rolling the new tool out slowly, so chances are it will be a week or two before you see it in your Google Analytics account (it’s already live in my personal accounts, but your mileage may vary).
Once it is live in your account, just head to the Content section of Google Analytics and look for In-Page Analytics. There, Browser Size is now among the existing options to see click-through percentages on your site.
As Google notes, thanks to the plethora of mobile devices with different screen sizes, the days when your visitors used just a few standard screen sizes are long over. Given the size of modern desktop screens, you can’t draw real conclusions from your users’ screen sizes anymore either, because “for many people, the visible portion of the web page is much smaller than the screen resolution, because of excessive toolbars and other clutter.” Conversion rates, however, are greatly affected by what your visitors see on your pages without having to scroll.
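Google's point boils down to simple arithmetic: two visitors with identical screens can see very different slices of a page once toolbars eat into the browser window. A sketch with hypothetical pixel heights:

```python
def visible_fraction(page_height_px, viewport_height_px):
    """Fraction of a page visible above the fold, without scrolling."""
    return min(viewport_height_px / page_height_px, 1.0)

# A 3,000px-tall page on the same 1080px screen, with and without clutter:
print(round(visible_fraction(3000, 950), 2))  # 0.32 -- clean browser window
print(round(visible_fraction(3000, 650), 2))  # 0.22 -- toolbar-laden window
```

A call to action sitting at 800px from the top would be above the fold for the first visitor and invisible without scrolling for the second, despite identical screen resolutions.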
It seems that Google has started rolling out the structured-data search update it talked about a few months back, or at least is testing it on a large scale. Several types of searches – those with immediate answers – now get an info box to the right of the search results, similar to the Google+ box, with information about the query as well as additional links.
The type of information varies with the query, but the box shows up for a large variety of searches, from questions like “what’s the tallest building in the world,” to actors and historical figures, to sports teams.
Wikipedia is almost always present in the info box, with a link pointing to the Wikipedia page for the query, be it an actor, a landmark, or a band.
But the info box provides more than just a snippet of text from Wikipedia and a link – that’s what regular search results do. It also lists all sorts of data about the query: date and place of birth for famous people, height and date of construction for buildings, singles and albums for bands, and so on.
What’s interesting is that Google doesn’t seem to be pointing to its other services all that much with the info box. The links are to the Wikipedia pages or to new Google searches, with the exception of Google Maps links.
Google has been working on utilizing structured data to greater effect for a long while now. Google Squared was a search engine designed specifically for structured data. But it’s a tough problem: the web is a very unstructured place. There’s plenty of information, but it’s hard to put in context and to merge across sources.
Search engines have been encouraging websites to use more and more metadata to describe various types of content and data, but that’s not enough, Google has also been working on improving the algorithms that make sense of the data on the web.
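The metadata in question is structured markup embedded in pages for crawlers to parse. As a rough illustration, here is a machine-readable description of a landmark using the schema.org vocabulary serialized as JSON-LD (one such format; the exact fields chosen are for the example, not a canonical schema):

```python
import json

# A machine-readable description of a landmark, of the sort a search engine
# could parse to populate an info box. Field choice is illustrative.
landmark = {
    "@context": "https://schema.org",
    "@type": "Place",
    "name": "Burj Khalifa",
    "description": "Tallest building in the world, completed in 2010.",
}

# Serialized, this sits in a page inside a <script type="application/ld+json">
# tag, alongside the human-readable content.
markup = json.dumps(landmark, indent=2)
print(markup)
```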
The feature is either in testing or being gradually rolled out for now. There’s a chance of seeing it in action on google.co.uk; you can try signing in or out of your account to see if it works.