The next “space race” might be the race to develop a synthetic model of the human brain – one that Google and Microsoft will participate in, if a report is true.
And instead of trying to beat the Russians, this time the Americans will be racing against the Europeans, who have already announced their plans.
The New York Times reported Monday that the Obama Administration is close to announcing the Brain Activity Map, which scientists quoted by the paper say could be on the scale of The Human Genome Project, a $3.8 billion effort to map the human genome that, the Times reported, returned $800 billion in jobs and other benefits.
The Brain Activity Map would attempt to document how the brain works, from the tiniest neurons up through how possibly the different regions of the brain communicate with one another. If the project succeeds, the Brain Activity Map might give us an understanding of how the human brain “computes” data through its complex web of neurons. It might also help scientists solve brain-related diseases like Alzheimer’s.
Modelling the human brain, and figuring out how it works, has long been one of the Holy Grails of supercomputing, prompting fears of a “technological singularity,” where successively advanced artificial intelligences design ever more refined versions of themselves, leading to a future where humans become increasingly irrelevant.
On a more realistic scale, learning how people think could allow services to begin anticipating their needs, a problem companies like Google and Microsoft would be interested in solving. The Times reported that a Jan. 17 meeting at Caltech was attended by the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency (DARPA) and the National Science Foundation, plus Google, Microsoft, and Qualcomm.
Google representatives did not return an emailed request for comment, possibly because of the U.S. President’s Day holiday. A Microsoft Research representative said that the company declined to comment.
Two of the foundations of the Times report were public statements: a tweet by NIH director Francis S. Collins, and a mention of the efforts to map the brain by President Obama in his State of the Union address:
“Every dollar we invested to map the human genome returned $140 to our economy,” Obama said, according to a transcript of the speech. “Every dollar. Today, our scientists are mapping the human brain to unlock the answers to Alzheimer’s. We’re developing drugs to regenerate damaged organs, devising new materials to make batteries 10 times more powerful. Now is not the time to gut these job-creating investments in science and innovation. Now is the time to reach a level of research and development not seen since the height of the space race. We need to make those investments.”
Collins then tweeted: “Obama mentions the #NIH Brain Activity Map in #SOTU”.
The Other Horses in the Race: the EU
Funding for the U.S. effort could last as long as 10 years, and possibly top $3 billion over that time. But the bar was set earlier by a massive collaboration among more than 80 European research agencies, which won an award from the EU of one billion euros ($1.34 billion) to develop a computer simulation of the human brain, known as The Human Brain Project.
That will partly cover the intriguingly named “Neuropolis,” a building dedicated to “in silico life science” that will serve, at least in part, as the computer infrastructure behind the effort. The Swiss Confederation, the Rolex Group, and various third-party sponsors are backing this part of the effort.
The HBP will build new platforms for “neuromorphic computing” and “neurorobotics,” enabling researchers to develop “new computing systems and robots based on the architecture and circuitry of the brain,” according to The Human Brain Project.
Other Horses: IBM/DARPA’s SyNAPSE
The Defense Advanced Research Projects Agency, responsible for the initial funding and challenges to design self-driving cars and other public-private partnerships, has worked with IBM to develop SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), whose ultimate goal is to build a “cognitive computing architecture with 10¹⁰ and 10⁶ synapses” – not a biologically realistic simulation of the human brain, but one where computation (“neurons”), memory (“synapses”), and communication (“axons,” “dendrites”) are mathematically abstracted away from biological detail.
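The abstraction described above (computation as “neurons,” memory as weighted “synapses,” communication as spikes) can be sketched in a few lines. Everything here, from the class names to the threshold-and-reset rule, is a hypothetical toy model, not IBM’s actual design:

```typescript
// Toy neuromorphic sketch: a neuron accumulates weighted input and,
// on crossing its threshold, resets and sends a spike down each synapse.
type Synapse = { target: number; weight: number };

class ToyNeuron {
  potential = 0;
  constructor(readonly threshold: number, readonly synapses: Synapse[]) {}
}

class ToyNetwork {
  constructor(readonly neurons: ToyNeuron[]) {}

  // Inject current into one neuron and propagate any resulting spikes.
  // Returns the indices of neurons that fired, in firing order.
  stimulate(index: number, current: number): number[] {
    const fired: number[] = [];
    const queue: Array<[number, number]> = [[index, current]];
    while (queue.length > 0) {
      const [i, input] = queue.shift()!;
      const n = this.neurons[i];
      n.potential += input;
      if (n.potential >= n.threshold) {
        n.potential = 0; // reset after firing
        fired.push(i);
        for (const s of n.synapses) queue.push([s.target, s.weight]);
      }
    }
    return fired;
  }
}
```

Stimulating one neuron in a two-neuron chain causes both to fire in sequence, which is the cascade behavior the “neurosynaptic core” abstraction is built to scale up.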
A Network of Neurosynaptic Cores Derived from Long-distance Wiring in the Monkey Brain: Each brain-inspired region is symbolically represented by a picture of IBM’s SyNAPSE Phase 1 neuro-synaptic core. Arcs are colored gold to symbolize wiring on a chip. (Source: Dharmendra S Modha)
Using 96 racks of the Blue Gene/Q at the Lawrence Livermore National Laboratory, the most powerful supercomputer in the world, the team simulated 2.084 billion neurosynaptic cores containing 53×10¹⁰ neurons and 1.37×10¹⁴ synapses, according to the blog of Dharmendra Modha, the leader of IBM’s Cognitive Computing division. That’s only 1,542 times slower than real time.
IBM assembled its diagram of the interconnections inside the cerebral cortex of the macaque, a small monkey, as an early model of how the brain works.
IBM’s Watson, of course, is another example of how a computer can interact with humans, absorbing the reams of unstructured data and winning Jeopardy, among other things.
Google itself last year sat down to develop its own neural network, then presented it with data drawn from YouTube videos. The result, as was somewhat widely publicized, was that the network constructed an internal image of a cat, and then spent its computational effort deciding which YouTube videos were and were not cats.
So how could Google or Microsoft benefit from a federal partnership? On the surface, they might receive federal funding for research. Cognitive computing on the order of what IBM is hoping to achieve, for example, can cost many millions of dollars, even if the computing resources are already available. (The Times reported that the Caltech meeting was designed to determine whether sufficient computing resources were indeed available; the answer, the paper reported, is yes.)
Thinking the way that humans think would allow Google or Microsoft to better anticipate what their users want, and to provide them with that data. Both companies can do that to some extent through data accumulated from millions of users; if the most common “t” word I search for is Twitter.com, Google can start pre-loading the page in the background. But thinking like a human thinks, and making the seemingly random associations that humans make, thousands of times faster than we do, could mean everything from artificially crafted memes to pre-processed sound bites for politicians.
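A crude version of that prefix-based anticipation can be sketched as follows. This is purely hypothetical and says nothing about how Google’s actual predictive loading works; it just counts past visits and guesses the most frequent match for a typed prefix:

```typescript
// Hypothetical sketch: predict which site a user intends from a typed
// prefix, based on a simple frequency count of past navigations.
class PrefixPredictor {
  private counts = new Map<string, number>();

  // Record one visit to a URL.
  record(url: string): void {
    this.counts.set(url, (this.counts.get(url) ?? 0) + 1);
  }

  // Most frequently visited URL starting with the prefix, if any.
  predict(prefix: string): string | undefined {
    let best: string | undefined;
    let bestCount = 0;
    for (const [url, count] of this.counts) {
      if (url.startsWith(prefix) && count > bestCount) {
        best = url;
        bestCount = count;
      }
    }
    return best;
  }
}
```

With three visits to twitter.com logged against one to theverge.com, `predict("t")` returns twitter.com, the page a browser could then begin pre-loading in the background.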
Version 7.0 of DotNetNuke, an open-source content-management system that you’ve probably never heard of, has now been released, bringing enterprise-level Web content functionality to users committed to Microsoft-based infrastructure. The .NET-based DotNetNuke could become a significant player in a growing cloud-computing environment where Microsoft Web servers may be more relevant.
Web Servers: Where Microsoft Doesn’t Rule
Depending on how you examine the data, less than one-fifth of the world’s sites run Microsoft-based Web servers like Internet Information Services (IIS). And, unlike all the cool kids running open-source code like Apache and nginx, IIS shops don’t always want to run popular content-management systems like Joomla, Drupal or WordPress.
[Update: The preceding paragraph was corrected to update an error regarding the capabilities of CMS systems and IIS. -BKP]
Let’s be honest: 16.52% of the world’s tracked Web servers running IIS in November 2012 is tiny compared to Apache’s 57.23% share. But having almost 17% of servers locked up is still a heck of a lot of sites — 103.3 million, actually. Even if just 1% of them need a WordPress-like CMS, that’s a little over a million sites pining away.
It’s all very well to snicker at these shops and prescribe spinning up some Apache-on-Linux servers, and installing Joomla or one of the other CMSes. But CIOs make IT investments for a strategic reason and have put a lot of time and money into their infrastructure. Generally, they’re not just rolling dice, which means it’s not always easy to get them to shift to non-Microsoft technologies.
Enter DotNetNuke Corp., maker of its self-named .NET-based platform. The CMS plays very well with native Microsoft tech and provides CIOs a robust tool that compares favorably with Drupal. Since its initial release in late 2002, DotNetNuke has enjoyed a strong following within the Microsoft ecosystem, and has about 800,000 registered users, according to Shaun Walker, co-founder and CTO of the company.
Filling The .NET Gap
The latest iteration of DotNetNuke has a broad range of new features, with a new interface that includes a more-robust editor and version-management system, as well as Active Directory support so enterprise employees can plug into site-content systems seamlessly. Cascading-style-sheet management is reportedly a lot easier to use, which should make designers happy.
DotNetNuke is a bit of an oddity within the Microsoft world. It’s actually an open-source licensed platform, using an MIT software license. The MIT license is what’s known as a permissive license, which means the code for the software is open but users and developers aren’t required to publish their changes, as with restrictive licenses like the GNU General Public License. Walker highlighted this as one reason why Microsoft-oriented customers don’t have a problem with using an open-source platform.
That DotNetNuke’s potential market is such a small minority of servers in the world might seem like a liability, but Walker believes that there is a lot of potential for DotNetNuke just around the corner. With the advent of HTML5 and JavaScript-based sites, “pretty soon the underlying architecture won’t matter as much.”
If development does shift more to the client-side layer, then the Web server layer where Apache, nginx, and IIS live would become more of an abstraction. Given the relatively low cost of cloud-based instances of even IIS, companies with more .NET assets and developers might therefore migrate to IIS in order to streamline their IT resources.
That’s the vision Walker has, but it remains to be seen if IIS can experience strong growth, even in the cloud, up against the free Apache and nginx servers.
For now, DotNetNuke soldiers on, filling a gap for IT managers who are still dedicated to the Microsoft Way.
Microsoft appears to be working on an augmented reality headset of its own, similar to Google’s Project Glass.
In a new patent application, the company describes a glasses-based system that overlays information onto the user’s view.
Unlike the Google version, though, it’s envisioned as something you’d wear specifically for live events rather than all day every day – at a baseball game, for example, where scores and other information could be displayed.
The glasses could be dished out to spectators at the beginning of an event, in much the same way as 3D glasses are at the movies today.
“A user wearing an at least partially see-through, head mounted display views the live event while simultaneously receiving information on objects, including people, within the user’s field of view, while wearing the head mounted display,” reads the application.
“The information is presented in a position in the head mounted display which does not interfere with the user’s enjoyment of the live event.”
Eye tracking would be used to work out where the user’s looking, and GPS to work out precisely where they are, and the data tailored accordingly.
While the patent application doesn’t mention the Xbox, the system looks an awful lot like the AR glasses leaked this summer as part of an internal Microsoft presentation on the future of the Xbox.
And as an eagle-eyed Geekwire writer noticed, one of its two inventors is Kathryn Stone Perez, executive producer on the Xbox incubation team.
Windows 8 is almost here, but despite Microsoft’s best efforts, there just aren’t that many Windows 8-style apps available yet. To kick-start the Windows 8 development community, Microsoft today announced that it is hosting a global hackathon in over 60 cities from November 9 to 11. Registration for the event is now open.
The hackathon, which Microsoft decided to call “Wowzapp 2012,” is mostly geared toward students, but a Microsoft spokesperson told me that it is open to all developers.
Microsoft will provide all participants with the necessary tools to build their apps, including Visual Studio 2012 Express (which, just like Visual Studio Professional, is free for students through Microsoft’s DreamSpark program). At the event, Microsoft app experts, developers and trainers will be on hand to help the participants develop their apps (or put the finishing touches on their existing apps). In addition to this help, participants will also receive a Windows Store registration code so they can submit their apps to the Store.
“Windows 8 represents a prime opportunity for students to gain practical experience as developers and potentially earn money through app downloads in the Windows Store, before even graduating from college,” says Microsoft. “Whether a student wants to offer their application for free or make money from paid apps or advertising, the Windows Store provides the flexibility to do so.”
In addition to this program, Microsoft is also running Generation App and other initiatives to motivate developers to write apps for Windows 8. Just last month, Microsoft also hosted Appfest in Bangalore, India, the world’s largest non-stop coding marathon, where over 2,500 developers wrote Windows 8 apps.
The Linux Foundation has proposed a solution to the conundrum Linux currently faces with the introduction of the Secure Boot specification in UEFI.
UEFI, the Unified Extensible Firmware Interface, or as the Linux community calls it “The Secret Plan of Microsoft to Take Over the World” (cue evil laughter), is thought of more as a necessary evil.
Unfortunately, the implementation of Secure Boot has proven to hinder the development of Linux distributions. Secure Boot can prevent the loading of any operating system that is not signed with an acceptable digital signature.
The Linux Foundation has found a solution to this problem, as explained by James Bottomley of the Linux Foundation’s Technical Advisory Board.
“The Linux Foundation will obtain a Microsoft Key and sign a small pre-bootloader which will, in turn, chain load (without any form of signature check) a predesignated boot loader which will, in turn, boot Linux (or any other operating system),” said Bottomley.
The pre-bootloader has a few protections in place, ensuring that it cannot be used as a vector for any type of UEFI malware to target secure systems.
This pre-bootloader can be used either to boot a CD/DVD installer or LiveCD distribution or even boot an installed operating system, in secure mode, for any distribution that chooses to use it.
Microsoft has yet to provide a signature, but the Linux Foundation says it is just a matter of time. The pre-bootloader will be available to download from the foundation’s website.
James Bottomley also provided some technical details about the project. “The real bootloader must be installed on the same partition as the pre-bootloader with the known path loader.efi (although the binary may be any bootloader including Grub2). The pre-bootloader will attempt to execute this binary and, if that succeeds, the system will boot normally,” stated The Linux Foundation representative.
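The trust chain Bottomley describes can be modeled in miniature: the firmware verifies the pre-bootloader’s signature, and the pre-bootloader then chain-loads loader.efi from its known path without any further check. The sketch below is a toy simulation with invented names and fields, not real UEFI code:

```typescript
// Toy model of the Secure Boot chain described above (hypothetical names).
type Binary = { path: string; signedWithMicrosoftKey: boolean };

function firmwareSecureBoot(
  preBootloader: Binary,
  partition: Map<string, Binary>
): string {
  // Firmware stage: Secure Boot refuses anything without an acceptable signature.
  if (!preBootloader.signedWithMicrosoftKey) return "refused";

  // Pre-bootloader stage: chain-load the real bootloader from its known
  // path on the same partition, deliberately without a signature check.
  const real = partition.get("loader.efi");
  if (real === undefined) return "no loader.efi";
  return `booted ${real.path}`;
}
```

The point the toy makes explicit is that only the pre-bootloader needs Microsoft’s signature; the distribution’s own bootloader (Grub2 or anything else) boots unchecked behind it.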
More information about the pre-bootloader will be made available once The Linux Foundation obtains the Microsoft key.
The latest language from the company once best known for its programming languages seeks to bring a higher class of developer into the Web apps space, without changing the foundation of the Web… even if such a change wouldn’t be such a bad idea.
As with so much else on the Web, platform engineers are largely of the mindset that it’s too late to do much about it now. The exceptions are companies whose backbones still have some swagger to them, especially in the face of something new called “competition.” While Microsoft has been taking fewer risks quantitatively of late, the risks it does take have been bigger: the Start Screen in Windows 8, the expansion of Xbox into a media platform, the splicing of Windows Phone with Windows PC, the abandonment of Silverlight in favor of WinRT.
One Giant Step Up From Level II BASIC
Microsoft’s introduction of TypeScript is not that big, and is not really a risk. In terms of product, it’s a free Visual Studio add-on (downloadable here) that enables more learned, professional developers to adopt more formal approaches in producing code for the Web. In terms of marketing, it’s a nearly no-cost way for Microsoft to put its stake in the ground in territory Google has been working to claim for itself.
Making The Editor The Enforcer
But for developers to get behind any language – even a supplemental one – they need a rich development environment that understands it natively, as rich as Eclipse for Java. Progress on that front for Dart has been mixed, which is not uncharacteristic of projects at Google.
By comparison, TypeScript has the virtue of inserting itself into a development environment that’s already somewhat rich: Visual Studio. Once the add-on is plugged in, VS 2012 recognizes TypeScript as a formal file type.
Then as you’re developing the script, as this sample from VS 2012 shows, the editor keeps track of the proper type of each variable, even when, as in this case, it has yet to be assigned a value. Here, pointing to member function getDist() reveals a tip showing it to be a function (the closed parentheses) whose return value is of type number.
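For readers without the add-on installed, a small illustrative snippet (not the article’s actual VS 2012 sample) shows the kind of checking the editor performs. The class and member names here are made up for illustration:

```typescript
// TypeScript lets the tooling catch type errors before the script runs.
class Point {
  constructor(public x: number, public y: number) {}

  // Return type is inferred as number; hovering in the editor
  // shows the signature getDist(): number.
  getDist() {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  }
}

const p = new Point(3, 4);
const d: number = p.getDist(); // OK: number
// const s: string = p.getDist(); // flagged at edit time: number is not a string
```

Plain JavaScript would happily run the commented-out line and fail later; the typed version is rejected in the editor, which is exactly the “editor as enforcer” point.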
Insert Devious Plot Here
If Microsoft is guilty of falling into any familiar pattern with TypeScript, it’s that it’s not the first product in its class. What TypeScript has going for it, though, is no particularly good reason not to be adopted by Web apps developers, except for the possibility of a preferable alternative. Standards are for communications systems and interfaces; options are for people. TypeScript is one more option, and in my view so far, a sensible one.
Of course, no official details are yet available, but James Akrigg, Microsoft’s head of technology for partners, confirmed at Misco Expo 12 in the United Kingdom that Windows 9 is the next major project his company will release.
“It should just work. I’m not going to say we’re going to do it [reach perfection] with Windows 8 because we’re already working on Windows 9,” he said when asked about his expectations regarding Windows 8.
Curiously, sources close to the matter are also suggesting that Microsoft is working on a different Windows iteration called Windows Blue, but this one may actually be the first major improvement to the yet-to-be-released Windows 8.
The formal release of the final Windows Server 2012 this week sets up Microsoft for a showdown in the enterprise datacenter with its newly re-armored arch rival VMware. At issue is whether an operating system based on a consumer-grade client belongs in a server that runs thousands of virtual machines at one time.
Microsoft has not entered this battle unprepared. Small, mobile devices are the drivers of technology stories. Platforms are the drivers of technology. At the heart of Microsoft’s core marketing strategy for the past quarter-century has been faith that its ability to deliver a solid platform will secure its future as a software provider for devices, and vice versa.
From the perspective of device users, there appear to be two operating environments, and the crux of competition is therefore seen as an epic battle between Linux and Windows. This is about as accurate a picture of the data center as anyone’s perspective of Russia from his or her own front porch in Alaska. In reality, Linux and Windows Server are both common components of networked computing environments. We rely on both.
The Real War
In recent years, Linux has found its place as a bedrock foundation for network computing platforms. It’s small, carries less baggage, and is highly adaptable. Windows Server is, by comparison, bulky, although since 2010, its less graphically dependent Server Core option has rapidly gained favor, especially now that it can be managed from the command line using PowerShell. But every modern data center today uses virtualized workloads, because they’re more efficient, easier to manage, safer and more secure. Virtualization is the key to cloud computing – the addition of a layer of abstraction between software and hardware, so that applications run in an environment that is not constrained by operating parameters or location.
Windows Server vs. Linux is no longer the battle of the century, except perhaps in some comic book drawn by kids who wouldn’t know a data center if it abducted them from their parents’ basements. True, Microsoft is waging a market battle against Red Hat, but it’s not for control of the bedrock operating system of server processors. And Red Hat isn’t even the most awesome competitor here. That would be VMware, whose new CEO Pat Gelsinger hails from Intel, and who comprehends the dynamics of processors and their operating systems as thoroughly as any executive of any company, anywhere.
Gelsinger has thrown down a gauntlet that aims to obliterate the present data center model, replacing it with components that render the processor OS either immaterial or non-existent. Windows Server would retain its strengths as a staging environment for critical business applications like SharePoint and Exchange, and systems like SQL Server. But that would be a tenuous position for Microsoft: remaking the image of Windows Server from a grounded platform to a floating raft, riding the waves generated by VMware and its growing network of partners.
While Microsoft would love to be able to own and operate the metaphor of floating on a cloud, it can’t afford to be perceived as floating on anything right now. So although the company did invoke its “Cloud OS” moniker (not really a trademark) during the formal premiere of Windows Server 2012, it had to present itself as rooted, as strengthening its own foundation, as extending the number of reasons why existing businesses should refrain from either investing in VMware virtualization platforms or experimenting with real cloud OSes – one of which happens to be produced by VMware.
“We built Windows Server 2012 with the cloud OS in mind,” remarked Bill Laing, Microsoft’s corporate VP for server and cloud, in a video released Tuesday. That’s a very carefully phrased metaphor – a bit like saying you’re cooking something with a meal in mind, as opposed to cooking a meal.
“Microsoft runs some of the world’s largest data centers and Internet-scale services,” Laing continued. “This uniquely positions Microsoft to pour all of that learning into our products, test them at scale and use our unparalleled experience in transforming data centers to address the needs and pressures of this new era of IT.”
The Package And The Payload
Laing went on to correctly define the modern data center as a provider of resources through services that are scalable to suit varying workloads, that can be pooled or shared so that they transcend location, that are perpetually available and backed up, and that can be effectively automated. That much is preaching to the crowd. But for Microsoft to reserve a place for itself at the table, it needs Windows Server 2012 to be a delivery vehicle for critical components of the data center, in a manner that parallels how Windows 7 and Windows 8 are, effectively, delivery vehicles for Office.
By “delivery vehicle,” I mean something that is delivered to the data center, that roots Windows Server in its existing location and hopefully lets it expand from there. In this case, there are two somethings in particular:
- Hyper-V is Microsoft’s hypervisor component – the part that enables an operating system (which can be scaled down to Server Core size) to run any number of virtual machines on behalf of clients. To that end, Server 2012 expanded Hyper-V’s statistical maximum capabilities dramatically: 320 logical processors per server, and up to 64 virtual processors per virtual machine (imagine an OS that thinks it’s running on quad-quad-quad core) with up to 1TB of addressable memory per VM. VMware’s ESX statistics may be comparable – assuming you want to go through the trouble of comparison, which requires fathoming that company’s arcane licensing model. While VMware holds the overall market share lead in virtualization, Microsoft continues to exploit its advantage with smaller businesses, seeding them with Hyper-V and growing them into customers the way it’s done before with SharePoint and SQL Server.
- Windows Azure (probably just “Azure” at some point) is Microsoft’s public cloud, whose principal role has now officially changed from a Platform-as-a-Service (PaaS) provider of .NET Framework services in the cloud, to an Infrastructure-as-a-Service (IaaS) host for virtual Windows machines. All Windows Server 2012 systems will include the ability to migrate workloads to the Azure public cloud, which does make Server 2012 to some degree a “cloud OS.” If Microsoft has one critical advantage over its competition, it’s the ability to introduce new capabilities to customers in small doses. Some folks are liable to try this out just to see what it does, whereas there’s no possibility of that happening with any other brand that requires a sizable up-front investment. Meanwhile, developers will continue to be able to deploy applications to Azure on a pay-for-use basis.
One important and impressive addition to Windows Server 2012 is worth noting here: virtual subnets can now span geographies. This way, if you have two data centers in different cities, you can live migrate a virtual machine between those data centers just as if they were situated right next to each other. VMware may offer similar capabilities – it’s not as though Microsoft invented this. But what you have to pay to get it with ESX is a significant talking point.
No Two Windows Servers Are Alike
In any discussion of server operating systems, one underappreciated aspect has been their roles. When Microsoft began endowing Windows Server 2003 with roles, it was with the notion that servers ran services the way clients ran applications. You want the server to do more, you run more services. You want more services, you add more servers. In those early days, the server operating system was somewhat monolithic. Today, think of roles like building blocks. When you select roles in installing Windows Server 2012, you’re assembling elements of the operating system. Different sets of roles make for a different operating system.
That fact is important in this context for the following reason: By successfully producing a single delivery vehicle for any number of various server roles, Microsoft has positioned Windows Server to compete with many tiers of products on many levels: against VMware and Citrix XenServer for virtualization, for instance, and against Red Hat for infrastructure and databases. This makes Windows Server one of Microsoft’s most successful and most critical strategic assets. It also places the operating system in a very tenuous position, because this capability centers around the notion that admins install it first. If admins install something else first, the game is over.
Almost nothing is challenging Windows Server’s qualifications to serve as an application host. But that’s not where the payoff is. For Microsoft to secure a permanent place for Windows Server 2012 in the data center over the next four years, it needs to make the case that scalability and versatility are not only feasible but practical with Server 2012 on the ground floor. If Bill Laing takes a trip to the ground floor anytime soon, though, he’s likely to find Pat Gelsinger already waiting for him.
Microsoft has organized its Imagine Cup student technology competition for the last 10 years and today, the company opened registration for the 2013 edition of this event. Students ages 16 and older can now register for their national events and the winners of these local events will be flown to St. Petersburg, Russia, where the worldwide finals will take place from July 8 to 11. For this edition of Imagine Cup, Microsoft has doubled the prize money to $300,000.
Microsoft also reorganized the competition around three new core areas: world citizenship, games and innovation. Previously, the flagship event was the software design competition, which a group of Ukrainian students won this year after developing gloves that can translate sign language into speech.
Since the first Imagine Cup in 2003, says Microsoft, over 1.65 million students across the globe have participated in Imagine Cup and a number of the teams that made it to the finals (and many that have not) went on to create startups. 2011 finalist Team OaSys from Jordan, for example, is currently working on bringing its system that allows quadriplegics to control a computer to market. To help the finalists commercialize their ideas, Microsoft also allows them to apply for its three-year, $3 million Imagine Cup Grants.
Here is Microsoft’s description of the new core competitions:
- World Citizenship: Honors the software application developed on Microsoft platforms with the greatest potential to make a positive impact on humanity. For example, a project might address education-, social- or healthcare-related problems.
- Games: Honors the most engaging and entertaining games targeting teens and youth, built on Microsoft platforms (Windows 8, Windows Phone, Kinect for Windows Software Development Kit, and Xbox Indie Games).
- Innovation: Honors apps that give consumers inspiration and innovation at their fingertips, whether it be a new spin on social networks, online shopping or search, built with Microsoft tools and technology.
The winners of each of these competitions will get $50,000.
In addition to the core competitions, students can also compete in a number of online challenges focused on specific technologies and platforms, including Windows 8, Windows Azure and Windows Phone.
For the first time in 25 years, Microsoft has changed its logo, the company announced Thursday. The new logo draws heavily from the look and feel of the design language formerly referred to as “Metro” used in the company’s upcoming Windows 8 operating system.
“It’s been 25 years since we’ve updated the Microsoft logo and now is the perfect time for a change,” Jeffrey Meisner, the general manager in charge of brand strategy for Microsoft, wrote in a blog post. “This is an incredibly exciting year for Microsoft as we prepare to release new versions of nearly all of our products. From Windows 8 to Windows Phone 8 to Xbox services to the next version of Office, you will see a common look and feel across these products providing a familiar and seamless experience on PCs, phones, tablets and TVs.”
The new corporate logo adorns the Microsoft Web page, and will be used to sign off all Microsoft TV advertising as well as other forms of marketing. The logo also currently appears on three Microsoft retail stores today in Boston, Seattle’s University Village and Bellevue, Washington. It will be added to all Microsoft stores in the next few months, Microsoft said, although the company may need some time to replace it across all of its digital properties and pages.
Microsoft, which halted use of the term “Metro” after a trademark claim from a German company, is now simply highlighting the use of the “Segoe” font that had been used elsewhere in its product line. The new logo reworks both the company’s logotype, set in the Segoe font, as well as the Microsoft symbol itself. Now, the logo combines four pastel colors – orange, green, blue and yellow – arranged into a square.
In an accompanying video (see below), Microsoft explained that the four colors are associated with its main product lines: blue for Windows, orange for Office, and green for Xbox. Yellow, presumably, will be associated with Microsoft’s enterprise products, some of its most profitable.
Microsoft maintained the same colors and placement as the conventional logo the company offered for several years. But gone is the sweeping “flag” motif, where Microsoft’s image appeared to be swept up by some unknown force. (Update: A Microsoft spokeswoman said the “flag” was never officially part of the Microsoft logo, and that this is the first time Microsoft has added a symbol – the four squares – to its company logo.)
The changes are most likely a deliberate choice: in February, Sam Moreau, principal director of user experience for Windows, reported that the company had brought in design agency Pentagram to rework the Windows logo – the first sign that big changes were coming to the Microsoft iconography.
“Paula [Scher, a Pentagram designer] asked us a simple question, ‘Your name is Windows. Why are you a flag?’” Moreau wrote. Now, the new logo actually provides a different slant on the Windows motif, which was offset to represent a “slight tilt in perspective.” The new Microsoft logo is front-facing, square and solid. The symbol is important in a world of digital motion, Meisner said. “This wave of new releases is not only a reimagining of our most popular products, but also represents a new era for Microsoft, so our logo should evolve to visually accentuate this new beginning,” Meisner wrote.

Early reactions have just begun to trickle out: “The new Microsoft logo reminds me of placeholder logos I drop in wireframes until I design a real one,” tweeted Andrew Heaton, a user experience designer.