When you use both Microsoft Outlook 2013 SP1 and Microsoft Exchange Server 2013 SP1, one of the more interesting features you can take advantage of is MAPI over HTTP (code-named Alchemy). MAPI over HTTP provides the ability for Messaging API (MAPI) clients and servers to communicate across an HTTP connection without using remote procedure calls (RPCs). Ever since Exchange's debut back in 1996, it and its clients have used RPCs. The elimination of the now-aged mechanism marks the conclusion of a modernization process that began more than a decade ago.
The Roots of RPC in Exchange
RPC has a long and noble history. In the early 1990s, Microsoft built on the work of the Open Software Foundation (OSF) to create RPC as an interprocess communications (IPC) mechanism that underpinned client-server applications such as Outlook and Exchange. Applications that use RPCs don’t have to worry about the details of communication across local and remote networks through different protocols because the RPC layer is responsible for this activity.
In effect, RPCs allow applications to get on with the task of providing their unique functionality instead of having to constantly reinvent the networking wheel, even if occasionally you need to mess around with settings to make everything work. (The method described in “XGEN: Changing the RPC Binding Order” might bring back memories—or nightmares—about some of the fine-tuning required in early Exchange deployments.) In early Exchange deployments, the RPCs connecting Exchange and its clients traveled across many different protocols, including TCP/IP, NetBIOS, and named pipes. Over time, the focus shifted to TCP/IP, which became the de facto standard for Exchange 2000 and later.
The Problems with RPC
Although RPC delivers significant advantages to application developers, it’s an old mechanism that was originally designed to work across LANs rather than across the Internet. The age and relative lack of recent development in the RPC mechanism is reflected in much of its documentation, such as the “How RPC Works” article, which is based on Windows Server 2003 and last updated in March 2003.
Today, more and more of our communications flow over the Internet. The trend to use cloud-based services is just one influence that has driven traffic to the Internet. Mobility is another important influence, as people increasingly take advantage of sophisticated mobile devices and high-speed wireless networks at work and at home. Both influences cause problems for applications that use RPCs.
The first problem is that RPC communications are sensitive to disruptions due to network hiccups (a well-known feature of the Internet). RPCs use fixed buffer sizes, which means that an application like Outlook might have to make multiple calls to a server to retrieve or send information. This situation isn’t improved by the fact that MAPI is a verbose protocol, where the transmission of messages from client to server involves a mass of properties and values.
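The cost of those fixed buffers is easy to picture. The sketch below is illustrative only: the 32KB buffer size is an assumption chosen for the example, not a documented Exchange value, but it shows how item size translates directly into round trips, each one a fresh opportunity for a network hiccup.

```python
import math

# Illustrative only: RPC transfers use fixed-size buffers, so a large
# item must be fetched in multiple calls. The 32 KB buffer size is an
# assumption for this sketch, not a documented Exchange value.
BUFFER_SIZE = 32 * 1024  # bytes per RPC call (assumed)

def rpc_calls_needed(item_size_bytes: int) -> int:
    """Number of round trips needed to fetch one item of the given size."""
    return math.ceil(item_size_bytes / BUFFER_SIZE)

# A 5 MB attachment takes far more round trips than a short message.
print(rpc_calls_needed(2 * 1024))         # short message -> 1 call
print(rpc_calls_needed(5 * 1024 * 1024))  # 5 MB attachment -> 160 calls
```

Every one of those 160 calls keeps the connection open longer, which is exactly the exposure to disruption described above.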
Outlook does its best to insulate users from network glitches. The introduction of the cached Exchange mode, drizzle mode synchronization, and various optimizations to minimize network consumption in Outlook 2003 provided users with a reliable and robust solution for working when networks weren’t dependable. At that time, the problem was more with dial-up telephone connections than Wi-Fi, but the foundation was set and Outlook has built on it ever since. It’s doubtful that Microsoft could’ve been so successful with Office 365 if Outlook didn’t work so well across the Internet.
However, things are a bit uglier behind the scenes. If network glitches occur, Outlook can end up in a cycle of constant retries to perform common operations such as downloading new items. And the sensitive nature of RPCs means that Outlook often has to restart activities when a disruption occurs, such as a move between two wireless access points. The chatty nature of RPCs, the increasing size of messages, and large attachments (e.g., digital photos, music files, Microsoft PowerPoint presentations) all make for more extended connections. The extended connections, in turn, expose those communications to more disruption. In the end, Outlook consumes many bytes on the wire just to get email updates done.
Microsoft introduced RPC over HTTP connections—aka Outlook Anywhere—to help with this situation. It was first used by Exchange Server 2003 to allow Outlook 2003 to connect to mailboxes without creating a VPN. However, it’s also a fairly old technique that hasn’t really changed much since its introduction in Windows 2000. Plus, Outlook Anywhere poses some unique challenges of its own. For example, HTTP is a half-duplex connection—in other words, it can carry traffic in a single direction. RPCs need full-duplex connections with synchronized inbound (RPC_IN_DATA) and outbound (RPC_OUT_DATA) links. Outlook Anywhere solves the problem by using two HTTP connections to carry the RPC traffic and session affinity to keep the links synchronized with each other. This arrangement is well known to administrators who set up load balancers to deal with Outlook Anywhere connections. Microsoft eased the problem somewhat in Exchange 2013 by moving the responsibility for handling session affinity to the Client Access Server, but the two connections are still used.
Microsoft has a lot of experience with other clients that use HTTP without RPCs. Both Exchange ActiveSync (EAS) and Outlook Web App (OWA) clients use longstanding HTTP connections to communicate with Exchange. These connections don’t require the complicated handshaking used when RPC connections are made, which means that they work much better than Outlook over low-quality links. Mobile devices in particular tend to hop between networks all the time, which creates a challenge in terms of maintaining connectivity with a server. EAS and OWA are both able to manage this kind of environment better than Outlook, which continually loses and restores connections at the expense of a great deal of network activity, most of which is hidden from end users by the cached Exchange mode.
Similar to EAS and OWA, MAPI over HTTP uses a kind of “hanging GET” transaction for Outlook notifications. Simply put, the client establishes a long-lived HTTP GET to the server so that the server knows the client is interested in notifications such as the arrival of new mail. If something happens on the server, the transaction is closed and the client issues a command to fetch the new data. If nothing happens during the GET interval (e.g., 20 minutes), the transaction is closed and a new HTTP GET is established.
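The hanging-GET pattern can be sketched in a few lines. This is a generic long-polling loop, not Outlook's actual implementation; the simulated transport below stands in for the server side so the sketch is self-contained.

```python
def long_poll(get_once, handle_event, max_polls=3):
    """Generic hanging-GET loop: `get_once()` blocks until the server
    reports an event (returns a payload) or the interval expires
    (returns None); either way, a fresh poll is issued right after."""
    polls = 0
    while polls < max_polls:        # bounded here so the sketch terminates
        payload = get_once()
        if payload is not None:
            handle_event(payload)   # something happened: fetch the new data
        polls += 1

# Simulated transport: two empty intervals, then a "new mail" event.
responses = iter([None, None, "new-mail"])
events = []
long_poll(lambda: next(responses, None), events.append)
print(events)  # ['new-mail']
```

In the real protocol, the "empty interval" corresponds to the GET interval expiring (e.g., after 20 minutes) and the client simply re-establishing the hanging GET.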
Outlook is a more complicated and sophisticated client than an OWA or EAS client (e.g., Mobile Outlook on Windows Phone). As such, you’d expect that Outlook consumes more bandwidth and executes more transactions with Exchange and other servers. For instance, Outlook 2013 fetches social media information from Facebook and LinkedIn, whereas OWA and EAS don’t. More important, when Outlook synchronizes with a mailbox, it downloads full-fidelity copies of items and attachments and usually processes every folder in the mailbox. Some control over the amount of data downloaded is available by restricting Outlook 2013 to downloading a certain window of data. However, Outlook typically performs a complete synchronization, whereas OWA or EAS typically concentrate on just a few folders and perhaps just the last few weeks of information from those folders. In addition, OWA and EAS don’t download attachments until requested by the user.
All this is of little importance when Outlook connects using a high-quality, fast network. It becomes very important when the network experiences frequent drops and reconnects, or when the bandwidth proves insufficient. At this point, an OWA or EAS client becomes a lot more usable in terms of getting work done. I discovered this in a very real sense when travelling in Australia last year, where some of the public Wi-Fi networks are less than good. OWA or EAS worked fine. As for Outlook, let’s just say that I used OWA far more than usual during that trip.
The complexity of troubleshooting Outlook connections with Exchange is also cited as a reason for change. Once multiple layers are involved, it becomes more difficult to trace connections and determine the root cause. Problems with Outlook connecting to Exchange (both on-premises and to Office 365) have been at the top of the support issues list for years, and the developers hope that simplifying the communications layer will make support easier. Tools such as the Microsoft Exchange RPC Extractor are available to help parse network captures and interpret the contents of RPCs, but the output can be difficult to understand. (The TechNet Magazine article “How IT Works: Troubleshooting RPC Errors” provides some useful background.) The hope is that it will be easier to make sense of the HTTP traffic in conjunction with other data (e.g., IIS logs) to understand the flow of client connections.
So, in a nutshell, RPC is:
- An aging communication mechanism, originally designed for LAN connections, that has been extended to the Internet, whose unique demands it struggles to meet
- A mechanism that’s delivered to end users through clients that do their best to disguise and hide the underlying problems
- A mechanism that’s difficult to debug and support
This sounds like an opportunity for improvement, which is exactly what the MAPI over HTTP initiative seeks to deliver.
MAPI over HTTP
Although not much has happened in the RPC world for the last decade, the same isn’t true for HTTP. It has received a huge amount of development attention from leaders in the industry, including Microsoft. Apart from bug fixes, not much can be expected for RPC in the future, so if you were a development group, would you put your proverbial eggs in the HTTP or RPC basket? Aside from the inevitable problems involved in making a change to the way products like Outlook and Exchange work, the choice seems pretty obvious if you want to take advantage of new techniques and features that are likely to come along with the HTTP protocol over the next few years.
Figure 1 shows the two modes of communication that Outlook 2013 SP1 can use with Exchange 2013 SP1. On the left, you have the older RPC-style connections over either TCP/IP (for Exchange Server 2010 and earlier) or HTTP Secure (HTTPS—for Exchange 2013). In this instance, RPCs form the middle ground to link the client and server. However, that middle ground creates an extra layer of complexity if a need arises to debug connections. On the right, you can see how MAPI over HTTP replaces the dependency on RPC by directing the remote operations executed by Outlook across HTTP connections. This is different than RPC over HTTP because the RPCs used to carry MAPI instructions (wrapped in TCP/IP or HTTP packets) are no longer used. Instead, the MAPI instructions are sent directly over an HTTP link, which is what most Internet applications use to convey their data.
A change like this involves a great deal of work on both the client and server, which is the reason why it only works when Outlook 2013 SP1 and Exchange 2013 SP1 work together. Remember that RPC is designed to relieve applications from the need to worry about IPC by providing a library of common functions that the applications can use to make connections. For backward compatibility with older clients and servers, Outlook 2013 SP1 and Exchange 2013 SP1 still contain all the RPC code, but they also have a new communications library that can be used to direct connections across an HTTP link.
When Outlook 2013 SP1 connects to an Exchange 2013 SP1 server, it advertises its ability to use MAPI over HTTP by sending an indicator in its initial connection. If MAPI over HTTP is enabled for the organization, Exchange responds by providing a set of URLs in the Autodiscover XML manifest returned to Outlook. The URLs point to the connection points (an IIS virtual directory for MAPI over HTTP) that Outlook can then use to establish the HTTP connection. If Outlook connects to Office 365, it receives only a set of external connection URLs, whereas on-premises Exchange servers (in a hybrid or pure on-premises configuration) transmit both internal and external connection points. Outlook attempts to use the internal connection first, then fails over to the external connection if necessary. Each set contains URLs for the mail store (mailbox databases) and directory, corresponding to the emsmdb and nspi interfaces used by Outlook to access mailboxes and the address book. (The Exchange Team Blog “Understanding how Outlook, CDO, MAPI, and Providers work together” provides some useful information about these interfaces.)
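The internal-first failover described above can be sketched as follows. This illustrates only the selection order, not Outlook's actual code, and the contoso URLs are hypothetical.

```python
def pick_endpoint(urls, is_reachable):
    """Try the internal MAPI over HTTP endpoint first, then fail over to
    the external one -- the order described for on-premises Autodiscover
    responses. `urls` maps 'internal'/'external' to connection URLs; an
    Office 365 response would carry only an 'external' entry."""
    for kind in ("internal", "external"):
        url = urls.get(kind)
        if url and is_reachable(url):
            return url
    raise ConnectionError("no MAPI over HTTP endpoint reachable")

# Hypothetical endpoints for illustration only.
urls = {
    "internal": "https://mail.contoso.local/mapi/emsmdb",
    "external": "https://mail.contoso.com/mapi/emsmdb",
}
# Simulate a client outside the corporate network: the internal
# endpoint is unreachable, so the external one is chosen.
chosen = pick_endpoint(urls, lambda u: ".local" not in u)
print(chosen)  # https://mail.contoso.com/mapi/emsmdb
```

A matching pair of URLs would exist for the directory (nspi) endpoint alongside the mail store (emsmdb) endpoint shown here.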
Exchange 2013 supports Outlook 2010 and Outlook 2007 clients, too. Right now, it’s unclear whether the MAPI over HTTP update will be backported to these clients, so for now, they’ll continue to use RPC over HTTP. Customer demand and business requirements might lead Microsoft to upgrade Outlook 2010. It’s much less likely that Microsoft will invest the engineering effort to upgrade Outlook 2007 given the age of the client. As future clients and servers are delivered over time, Microsoft will remove support for RPC over HTTP in the same way it removed support for RPC over TCP/IP (in Exchange 2013) and UDP (initially in Exchange 2010 and finally in Exchange 2013).
Bumps Along the Road
It all sounds simple, but like many other changes in technology, using MAPI over HTTP requires some planning, testing, and other work on the part of administrators. First, MAPI over HTTP isn’t enabled by default. You have to enable it by updating the organization configuration to let mailbox servers know that it’s OK to tell Outlook clients that MAPI over HTTP is available.
As shown in Figure 2, this is a one-time operation performed by running the Windows PowerShell command:
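Per the Exchange 2013 SP1 documentation, the switch in question is the MapiHttpEnabled parameter of the Set-OrganizationConfig cmdlet, run from the Exchange Management Shell:

```powershell
Set-OrganizationConfig -MapiHttpEnabled $true
```

You can confirm the setting afterward with Get-OrganizationConfig | Format-List MapiHttpEnabled.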
This shouldn’t be done until the entire organization is upgraded to Exchange 2013 SP1 and you have deployed Outlook 2013 SP1 clients. Office 365 tenants won’t get to vote on when this update happens, as the responsibility for making the change lies in the hands of Microsoft.
Second, switching network communication mechanisms doesn’t come without some pain for clients. Outlook maintains information about its configuration in MAPI profiles. The change to MAPI over HTTP requires that the profiles be refreshed with the new information. Afterward, Outlook has to switch to the new MAPI over HTTP endpoints. The switch can’t be done on the fly, so Outlook has to exit and restart. Users will therefore see the infamous message: “The Microsoft Exchange administrator has made a change on the server that requires you to quit and restart Outlook.” After Outlook is restarted, it connects via MAPI over HTTP and everything starts to flow as planned.
A side effect of the switchover to MAPI over HTTP is that the Outlook profile page that controls the Outlook Anywhere settings, which Figure 3 shows, will no longer be visible. This is logical because Outlook Anywhere is no longer in use, but you might have to update end-user documentation to reflect the new reality.
If all the servers in the organization run Exchange 2013 SP1 (or later), the change to MAPI profiles should be a one-off affair. However, if you maintain some down-level servers (earlier versions of Exchange 2013, Exchange 2010, or Exchange 2007) and move mailboxes from Exchange 2013 SP1 to those servers, the MAPI profiles for those mailboxes will need a further refresh.
Users might or might not be concerned about having to restart Outlook. If users accept this kind of thing, they’ll restart Outlook as directed. However, if users are worried about it, they might create a Help desk support ticket. Given that the switchover is a one-time organization-wide event, you might have a situation in which hundreds of users return to work after a blissful weekend, resume their PCs from hibernation, see the message, and call the Help desk. Some careful communication and guidance to users is required here.
Third, no real information is available yet about the possible impact on performance (client or server) or network bandwidth. Although RPCs can be fragile, they’re compact. Pure HTTP traffic is likely to generate more bytes on the wire. For instance, the Exchange developers have decided to use HTTP headers to carry diagnostic information in MAPI over HTTP connections. This information is useful in debugging connectivity problems that might arise, but the additional traffic might be a problem for companies that depend on remote connections, such as those with highly mobile users or Office 365 tenants. Microsoft is likely to release more data about bandwidth requirements that compares MAPI over HTTP with RPC over HTTP. This data (or your own data) should be used to determine whether your network connections’ capacity needs to be upgraded. In addition, if you use appliances (e.g., load balancers, WAN accelerators), you should check with the vendors to determine whether any configuration changes should be made to optimize for MAPI over HTTP traffic.
Finally, there’s always the potential for edge cases that aren’t tested during development to pop up soon after an update is shipped to customers. Think about what might happen here: check whether your environment has any particular aspect that Microsoft is unlikely to test. For example, a particular Outlook add-on might cause problems after the switchover. Given that add-ons depend on Outlook networking, I think this is an unlikely scenario, but it’s wise to test the entire configuration of your operational environment as thoroughly as possible before enabling such a profound change.
Microsoft believes that the transition to MAPI over HTTP will help Outlook improve its ability to work in a world where mobile communications are an absolute requirement. It makes sense for technology to adapt and embrace new conditions rather than look back to a time when the only available networks were dial-up telephones and safe corporate Ethernets. RPCs have had their day. They’ll remain in use for as long as old clients exist (and are supported), but the writing is firmly on the wall. HTTP-based connections are the lingua franca of the Internet and that’s what Outlook will use in future.
The Right Approach
Installing Exchange 2013 SP1 across an organization won’t force you to switch to MAPI over HTTP. For now, you can continue to use RPC over HTTP until you are ready to switch, most likely sometime in the future when you’ve upgraded all your desktops to Outlook 2013 SP1. It’s possible that there will be a time when using MAPI over HTTP is mandatory, but that’s likely to be well in the future when new versions of Exchange and Outlook are built to use only this mechanism.
I see lots of good things about this transition. It doesn’t seem to make a lot of sense to keep old networking and communication technologies in place when they struggle to deal with modern operating conditions. Developing a new approach that’s designed to cope well with the kind of connections being used today seems like a good idea. Although you can expect some bumps along the road until the inevitable configuration, operational, and programming problems are solved, I hope that the implementation occurs without undue disruption to end users. When it comes to technical evolution, good planning, comprehensive testing, and solid execution seem like the right approach to take.
Microsoft has just unveiled the brand-new Visual Studio 2013 development suite and, together with it, the company also introduced a cloud-based companion called Visual Studio Online.
This new service is based on Microsoft’s very own Windows Azure and comes with a freeware license for development groups of up to five users.
Visual Studio Online will, however, be offered in four different options, namely Visual Studio Online Advanced, Visual Studio Online Professional, Visual Studio Online Basic, and Visual Studio Premium with MSDN.
This last version comes with support for an unlimited number of users, as well as integration with lots of development tools, such as the desktop-based Visual Studio, Eclipse, and Xcode. It also includes support for Office 365 business apps and tools to host team projects on-premises and in the cloud.
At the same time, Visual Studio Online comes with support for Monaco, a new coding environment specifically designed for the cloud. According to ZDNet, Microsoft recently revealed that work on Monaco started three years ago; it’s a service that provides specific Visual Studio features right within a browser.
“Visual Studio Online, formerly Team Foundation Service, is the home for your project data in the cloud. Get up and running in minutes on our cloud infrastructure without having to install or configure a single server,” Microsoft said in the introduction of the new cloud-based service.
“Set up an environment that includes everything from hosted Git repos and project tracking tools, to continuous integration and an IDE, all packaged up in a monthly per-user plan. Connect to your project in the cloud using your favorite development tool, such as Visual Studio, Eclipse or Xcode.”
In the newly released Volume 15 of the Microsoft Security Intelligence Report (SIRv15), one of the key findings to surface relates to new insight on the Windows XP operating system as it inches toward end of support on April 8, 2014. In this post we want to highlight our Windows XP analysis and examine what the data says about the risks of being on unsupported software. In the SIR, we traditionally report on supported operating systems only. For this analysis we examined data from unsupported platforms, like Windows XP SP2, from a few different data points. Earlier today we published a blog post that discussed a new metric for analyzing malware
Microsoft will acquire Nokia’s devices and services unit and license the company’s mapping services in a deal worth $7.2 billion in a bid to bolster the company’s position in the smartphone market.
The software giant will pay $5 billion for “substantially all” of Nokia’s phone unit and another $2.2 billion to license its patents, the companies announced late Sunday. As part of the deal, Stephen Elop will step down as Nokia chief executive to become the executive vice president of the devices and services division. Elop, a former Microsoft executive, is one of a handful of candidates suggested to replace Microsoft CEO Steve Ballmer, who is expected to retire by next summer.
“Today’s agreement will accelerate the momentum of Nokia’s devices and services, bringing the world’s most innovative smartphones to more people, while continuing to connect the next billion people with Nokia’s mobile phone portfolio,” Ballmer and Elop said in a joint statement.
Elop, the former president of the Microsoft Business Division, left the software giant three years ago this month to head up Nokia. He joined Microsoft in January 2008 after serving as COO of Juniper Networks and as an executive at Adobe Systems.
“Building on our successful partnership, we can now bring together the best of Microsoft’s software engineering with the best of Nokia’s product engineering, award-winning design, and global sales, marketing and manufacturing,” Elop said in a statement. “With this combination of talented people, we have the opportunity to accelerate the current momentum and cutting-edge innovation of both our smart devices and mobile phone products.”
Nokia Chairman Risto Siilasmaa will become Nokia’s interim CEO while the company searches for a permanent replacement.
“For Nokia, this is an important moment of reinvention and from a position of financial strength, we can build our next chapter,” Siilasmaa said. “After a thorough assessment of how to maximize shareholder value, including consideration of a variety of alternatives, we believe this transaction is the best path forward for Nokia and its shareholders. Additionally, the deal offers future opportunities for many Nokia employees as part of a company with the strategy, financial resources and determination to succeed in the mobile space.”
Also joining Microsoft as part of the deal are Jo Harlow, Juha Putkiranta, Timo Toikkanen, and Chris Weber.
The acquisition suggests that, like Apple, Microsoft believes it needs more direct control of handset manufacturing to succeed in the smartphone market. The relationship between Microsoft and Nokia began in February 2011 when Elop, who had arrived at the beleaguered Finnish handset maker from Microsoft five months earlier, announced at a developer’s conference that Nokia was adopting Windows Phone 7 as its primary smartphone OS.
At the time, Nokia was lagging far behind in the smartphone wars and was looking to grow beyond its own MeeGo and Symbian platforms. Google’s Android OS was another contender, but Elop promised that a Microsoft partnership would encompass phones, developers, mobile services, partnerships with carriers, and app stores to distribute software.
Two days later at the 2011 Mobile World Congress in Barcelona, Spain, Elop expanded on the news with a promise that Nokia would “swing” the market toward Microsoft’s Windows Phone 7 platform. The company’s first phones began to arrive later that year, with its first flagship device, the Nokia Lumia 900, arriving almost a year later at the 2012 CES.
Despite market domination by Android and Apple’s iOS, handsets running Microsoft’s Windows Phone operating system seem to be gaining in popularity, especially among new users. Kantar Worldpanel ComTech reported this weekend that at the end of July Windows Phone held an 8.2 percent share across the five major European markets, including the UK, France, and Germany.
As part of the deal, Nokia will grant Microsoft a 10-year non-exclusive license to its patents, and Microsoft will grant Nokia reciprocal rights to use its location-based patents.
The deal, which is expected to close in the first quarter of 2014, is still subject to shareholder and regulatory approval. When the deal closes, approximately 32,000 Nokia employees will transfer to Microsoft, including 4,700 in Finland and 18,300 involved in manufacturing.
One of the appealing features that shipped with Windows 8.1 Preview is support for a wider range of devices, delivered through several new and innovative APIs included in the platform release.
Windows 8 already offered support for various device scenarios, including print, sensors, and geolocation, but provided limited access to arbitrary devices (available only for dedicated device apps).
Windows 8.1 Preview, however, changes that, as support for APIs such as Point of Sale (POS), 3D printing, and scanning is included in the new platform release.
Furthermore, Microsoft explains that the new feature, paired with device protocol APIs, can deliver access to a wide array of new devices.
“Device protocol APIs, new to Windows 8.1, allow a Windows Store app to talk to a device over industry standard protocols like USB, HID, Bluetooth, Bluetooth Smart, and Wi-Fi Direct,” George Roussos, senior program manager, explains in a blog post.
“As a developer, all you need to do is simply identify the device (leveraging metadata) and then open a communication channel to the device. Opening a channel prompts for user consent. This is a critical step to help prevent apps from accidentally or maliciously communicating with one or more devices without the user’s awareness.
“Once access is granted, the app can communicate with a device, including starting long data transfers, which can continue even if the user swipes to another app,” he also notes.
Through the aforementioned access to devices available via device protocol APIs, a series of new scenarios is supported in Windows 8.1 Preview, including IHV Device Access, which enables hardware vendors to come up with new apps for their products without the need for specific drivers.
Manufacturers will also be able to create a standard to allow communication with their devices, which means that developers can build new apps that could communicate with them.
Said access to devices also enables home developers to come up with their own software to communicate with non-standard devices.
Additional info on how to build, test, and deploy such applications can be found in a series of resources on Microsoft’s own website, including:
Using Geolocation and Geofencing in Windows Store Apps [3-9034]
3D Printing with Windows [3-9027]
Building Windows Apps That Use Scanners [3-025]
How to Use Point-of-Sale Devices in Your App [3-029]
Apps for Bluetooth, HID, and USB Devices (focusing on Bluetooth RFCOMM) [3-026]
Apps for Bluetooth Smart Devices [3-9028]
Apps for USB Devices [3-924a]
Apps for HID Devices [2-924b]
“Windows 8.1 Preview provides rich support for apps to communicate with devices. By integrating standard devices (e.g. printers, sensors) or even custom devices seamlessly into your apps, users can enjoy a fast and fluid way of interacting with their favorite devices,” George Roussos concludes.
The next “space race” might be the race to develop a synthetic model of the human brain – one that Google and Microsoft will participate in, if a report is true.
And instead of trying to beat the Russians, this time the Americans will be racing against the Europeans, who have already announced their plans.
The New York Times reported Monday that the Obama Administration is close to announcing the Brain Activity Map, which scientists quoted by the paper say could be on the scale of the Human Genome Project, a $3.8 billion project to map the human genome that, the Times reported, returned $800 billion in jobs and other benefits.
The Brain Activity Map would attempt to document how the brain works, from the tiniest neurons up through how possibly the different regions of the brain communicate with one another. If the project succeeds, the Brain Activity Map might give us an understanding of how the human brain “computes” data through its complex web of neurons. It might also help scientists solve brain-related diseases like Alzheimer’s.
Modelling the human brain, and figuring out how it works, has long been one of the Holy Grails of supercomputing, prompting fears of a “technological singularity,” where successively advanced artificial intelligences design ever more refined versions of themselves, leading to a future where humans become increasingly irrelevant.
On a more realistic scale, learning how people think could allow services to begin anticipating their needs, a problem companies like Google and Microsoft would be interested in solving. The Times reported that a Jan. 17 meeting at Caltech was attended by the National Institutes of Health (NIH), the Defense Advanced Research Projects Agency, and the National Science Foundation, plus Google, Microsoft, and Qualcomm.
Google representatives did not return an emailed request for comment, possibly because of the U.S. President’s Day holiday. A Microsoft Research representative said that the company declined to comment.
Two of the foundations of the Times report were public statements: a tweet by NIH director Francis S. Collins, and a mention of the efforts to map the brain by President Obama in his State of the Union address:
“Every dollar we invested to map the human genome returned $140 to our economy,” Obama said, according to a transcript of the speech. “Every dollar. Today, our scientists are mapping the human brain to unlock the answers to Alzheimer’s. We’re developing drugs to regenerate damaged organs, devising new materials to make batteries 10 times more powerful. Now is not the time to gut these job-creating investments in science and innovation. Now is the time to reach a level of research and development not seen since the height of the space race. We need to make those investments.”
Collins then tweeted: “Obama mentions the #NIH Brain Activity Map in #SOTU”.
The Other Horses in the Race: the EU
Funding for the U.S. effort could last as long as 10 years, and possibly top $3 billion over that time. But the bar was set earlier by a massive collaboration among more than 80 European research agencies, which won an award from the EU of one billion euros ($1.34 billion) to develop a computer simulation of the human brain, known as The Human Brain Project.
That will partly cover the intriguingly named “Neuropolis,” a building dedicated to “in silico life science” that will serve, at least in part, as the computer infrastructure behind the effort. The Swiss Confederation, the Rolex Group, and various third-party sponsors are backing this part of the effort.
According to The Human Brain Project, the HBP will build new platforms for “neuromorphic computing” and “neurorobotics,” enabling researchers to develop “new computing systems and robots based on the architecture and circuitry of the brain.”
Other Horses: IBM and DARPA’s SyNAPSE
The Defense Advanced Research Projects Agency, which also funded the challenges that produced self-driving cars and other public-private partnerships, has worked with IBM to develop SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), whose ultimate goal is to build a “cognitive computing architecture with 10¹⁰ neurons and 10⁶ synapses” – not a biologically realistic simulation of the human brain, but one where computation (“neurons”), memory (“synapses”), and communication (“axons,” “dendrites”) are mathematically abstracted away from biological detail.
A Network of Neurosynaptic Cores Derived from Long-distance Wiring in the Monkey Brain: Each brain-inspired region is symbolically represented by a picture of IBM’s SyNAPSE Phase 1 neuro-synaptic core. Arcs are colored gold to symbolize wiring on a chip. (Source: Dharmendra S Modha)
Using 96 Blue Gene/Q racks of the Lawrence Livermore National Laboratory’s Sequoia, the most powerful supercomputer in the world, the team simulated 2.084 billion neurosynaptic cores containing 53×10¹⁰ neurons and 1.37×10¹⁴ synapses, according to the blog of Dharmendra Modha, the leader of IBM’s Cognitive Computing division. The simulation ran only 1,542 times slower than real time.
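Taking the reported totals as 53×10¹⁰ neurons and 1.37×10¹⁴ synapses, a bit of back-of-the-envelope arithmetic puts the scale in context (the human-brain figure of roughly 8.6×10¹⁰ neurons is a common outside estimate, not from the source):

```python
# Rough scale check of the reported simulation figures.
neurons = 53e10        # 53 x 10^10 neurons
synapses = 1.37e14     # 1.37 x 10^14 synapses

human_neurons = 8.6e10 # common estimate for the human brain (assumption)

print(f"Synapses per neuron: ~{synapses / neurons:.0f}")                  # ~258
print(f"Multiple of human neuron count: ~{neurons / human_neurons:.1f}x") # ~6.2x
```

In other words, the simulation carried several times more neurons than a human brain, though far fewer synapses per neuron than biology’s roughly ten thousand.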
IBM assembled its diagram of the interconnections inside the cerebral cortex of the macaque, a small monkey, as an early model of how the brain works.
IBM’s Watson, of course, is another example of how a computer can interact with humans, absorbing reams of unstructured data and winning Jeopardy!, among other things.
Google itself last year set out to develop its own neural network, then presented it with data from its own services. The result, as was somewhat widely publicized, was that the network ended up constructing an internal image of a cat, and then spent its computational effort deciding which YouTube videos did and did not contain cats.
So how could Google or Microsoft benefit from a federal partnership? On the surface, they might receive federal funding for research. Cognitive computing on the order of what IBM is hoping to achieve, for example, can cost millions and millions of dollars, even if the computing resources are already available. (The Times reported that the Caltech meeting was designed to determine whether sufficient computing resources were indeed available; the answer is yes, the paper reported.)
Thinking the way that humans think would allow Google or Microsoft to anticipate even more precisely what their users want, and to provide them with that data. Both companies can do that to some extent through data accumulated from millions of users; if the most common “t” word I search for is Twitter.com, Google can start pre-loading the page in the background. But thinking like a human thinks, and making the seemingly random associations that humans make, thousands of times faster than we do, could mean everything from artificially crafted memes to pre-processed sound bites for politicians.
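The per-user prediction described above can be sketched as a simple frequency table keyed by query prefix (a toy illustration only; the history data and function names here are made up, and real search prediction is far more sophisticated):

```python
from collections import Counter, defaultdict

# Toy per-user model: count past queries, then predict the most
# frequent completion for whatever prefix the user has typed so far.
history = ["twitter.com", "twitter.com", "tumblr.com", "weather"]

by_prefix = defaultdict(Counter)
for query in history:
    for i in range(1, len(query) + 1):
        by_prefix[query[:i]][query] += 1

def predict(prefix):
    """Return the most common past query starting with `prefix`, or None."""
    candidates = by_prefix.get(prefix)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("t"))   # twitter.com -- the page a browser might pre-load
```

A real system would weight recency and global popularity as well; the point is only that even a crude per-user frequency model is enough to justify pre-loading the top candidate.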
Version 7.0 of DotNetNuke, an open-source content-management system that you’ve probably never heard of, has now been released, bringing enterprise-level Web-content functionality to users committed to Microsoft-based infrastructure. The .NET-based DotNetNuke could be a significant player in a growing cloud-computing environment where Microsoft Web servers may become more relevant.
Web Servers: Where Microsoft Doesn’t Rule
Depending on how you examine the data, less than one-fifth of the world’s sites run Microsoft-based Web servers like Internet Information Server (IIS). And, unlike all the cool kids running open-source code like Apache and nginx, IIS shops don’t always want to run popular content-management systems like Joomla, Drupal or WordPress.
[Update: The preceding paragraph was corrected to update an error regarding the capabilities of CMS systems and IIS. -BKP]
Let’s be honest: the 16.52% of the world’s tracked Web servers running IIS in November 2012 is tiny compared to Apache’s 57.23% share. But having almost 17% of servers locked up is still a heck of a lot of sites — 103.3 million, actually. Even if just 1% of them need a WordPress-like CMS, that’s a little over a million sites pining away.
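Those figures are easy to sanity-check against one another with a quick arithmetic pass over the article’s own numbers:

```python
# Sanity-check the market-share arithmetic from the article.
iis_share = 0.1652      # IIS share of tracked Web servers, November 2012
iis_sites = 103.3e6     # sites reported running IIS

total_sites = iis_sites / iis_share
cms_candidates = iis_sites * 0.01   # "just 1% of them"

print(f"Total tracked sites: ~{total_sites / 1e6:.0f} million")   # ~625 million
print(f"1% of IIS sites: ~{cms_candidates / 1e6:.2f} million")    # ~1.03 million
```

So 103.3 million IIS sites implies roughly 625 million tracked sites overall, and the “little over a million” figure checks out.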
It’s all very well to snicker at these shops and prescribe spinning up some Apache-on-Linux servers and installing Joomla or one of the other CMSes. But CIOs make IT investments for strategic reasons and have put a lot of time and money into their infrastructure. Generally, they’re not just rolling dice, which means it’s not always easy to get them to shift to non-Microsoft technologies.
Enter DotNetNuke Corp., maker of the self-named .NET-based platform. The CMS plays very well with native Microsoft tech and provides CIOs a robust tool that compares favorably with Drupal. Since its initial release in late 2002, DotNetNuke has enjoyed a strong following within the Microsoft ecosystem, and has about 800,000 registered users, according to Shaun Walker, co-founder and CTO of the company.
Filling The .NET Gap
The latest iteration of DotNetNuke has a broad range of new features, with a new interface that includes a more-robust editor and version-management system, as well as Active Directory support so enterprise employees can plug into site-content systems seamlessly. Cascading-style-sheet management is reportedly a lot easier to use, which should make designers happy.
DotNetNuke is a bit of an oddity within the Microsoft world: it’s actually an open-source platform, released under the MIT license. The MIT license is what’s known as a permissive license, meaning the code is open but users and developers aren’t required to publish their changes, as they are under copyleft licenses like the GNU General Public License. Walker highlighted this as one reason why Microsoft-oriented customers don’t have a problem with using an open-source platform.
That DotNetNuke’s potential market is such a small minority of servers in the world might seem like a liability, but Walker believes there is a lot of potential for DotNetNuke just around the corner. With the advent of HTML5 and JavaScript-based sites, “pretty soon the underlying architecture won’t matter as much.”
If development does shift more to the client-side layer, then the Web-server layer where Apache, nginx, and IIS live would become more of an abstraction. Given the relatively low cost of cloud-based instances of even IIS, companies with more .NET assets and developers might therefore migrate to IIS to streamline their IT resources.
That’s the vision Walker has, but it remains to be seen if IIS can experience strong growth, even in the cloud, up against the free Apache and nginx servers.
For now, DotNetNuke soldiers on, filling a gap for IT managers who are still dedicated to the Microsoft Way.
Microsoft appears to be working on an augmented reality headset of its own, similar to Google’s Project Glass.
In a new patent application, the company describes a glasses-based system that overlays information onto the user’s view.
Unlike the Google version, though, it’s envisioned as something you’d wear specifically for live events rather than all day every day – at a baseball game, for example, where scores and other information could be displayed.
The glasses could be dished out to spectators at the beginning of an event, in much the same way as 3D glasses are at the movies today.
“A user wearing an at least partially see-through, head mounted display views the live event while simultaneously receiving information on objects, including people, within the user’s field of view, while wearing the head mounted display,” reads the application.
“The information is presented in a position in the head mounted display which does not interfere with the user’s enjoyment of the live event.”
Eye tracking would be used to work out where the user is looking, and GPS to work out precisely where they are, with the data tailored accordingly.
While the patent application doesn’t mention the Xbox, the system looks an awful lot like the AR glasses leaked this summer as part of an internal Microsoft presentation on the future of the Xbox.
And as an eagle-eyed GeekWire writer noticed, one of its two inventors is Kathryn Stone Perez, executive producer on the Xbox incubation team.
Windows 8 is almost here, but despite Microsoft’s best efforts, there just aren’t that many Windows 8-style apps available yet. To kick-start the Windows 8 development community, Microsoft today announced that it is hosting a global hackathon in over 60 cities from November 9 to 11. Registration for the event is now open.
The hackathon, which Microsoft decided to call “Wowzapp 2012,” is mostly geared toward students, but a Microsoft spokesperson told me that it is open to all developers.
Microsoft will provide all participants with the necessary tools to build their apps, including Visual Studio 2012 Express (which, just like Visual Studio Professional, is free for students through Microsoft’s DreamSpark program). At the event, Microsoft app experts, developers and trainers will be on hand to help the participants develop their apps (or put the finishing touches on their existing apps). In addition to this help, participants will also receive a Windows Store registration code so they can submit their apps to the Store.
“Windows 8 represents a prime opportunity for students to gain practical experience as developers and potentially earn money through app downloads in the Windows Store, before even graduating from college,” says Microsoft. “Whether a student wants to offer their application for free or make money from paid apps or advertising, the Windows Store provides the flexibility to do so.”
In addition to this program, Microsoft is also running Generation App and other initiatives to motivate developers to write apps for Windows 8. Just last month, Microsoft also hosted Appfest in Bangalore, India, the world’s largest non-stop coding marathon, where over 2,500 developers wrote Windows 8 apps.
The Linux Foundation has proposed a solution to the conundrum Linux faces with the introduction of the UEFI Secure Boot specification.
UEFI, the Unified Extensible Firmware Interface, or as the Linux community calls it “The Secret Plan of Microsoft to Take Over the World” (cue evil laughter), is widely regarded as a necessary evil.
Unfortunately, the implementation of Secure Boot has proven to hinder the development of Linux distributions. Secure Boot can prevent the loading of any operating system that is not signed with an acceptable digital signature.
The Linux Foundation has found a solution to this problem, as explained by James Bottomley of the Linux Foundation Technical Advisory Board.
“The Linux Foundation will obtain a Microsoft Key and sign a small pre-bootloader which will, in turn, chain load (without any form of signature check) a predesignated boot loader which will, in turn, boot Linux (or any other operating system),” said Bottomley.
The pre-bootloader has a few protections in place, ensuring that it cannot be used as a vector for any type of UEFI malware to target secure systems.
This pre-bootloader can be used either to boot a CD/DVD installer or LiveCD distribution or even boot an installed operating system, in secure mode, for any distribution that chooses to use it.
Microsoft has yet to provide a signature, but the Linux Foundation says it is just a matter of time. The pre-bootloader will then be available to download from the foundation’s website.
James Bottomley also provided some technical details about the project. “The real bootloader must be installed on the same partition as the pre-bootloader with the known path loader.efi (although the binary may be any bootloader including Grub2). The pre-bootloader will attempt to execute this binary and, if that succeeds, the system will boot normally,” stated The Linux Foundation representative.
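The trust chain Bottomley describes can be modeled in miniature: the firmware verifies the pre-bootloader against an enrolled key, and the pre-bootloader then hands control to loader.efi without any further check. Here is a simplified Python sketch of that control flow (all names are illustrative; this is a conceptual model, not actual UEFI code):

```python
# Simplified model of the Linux Foundation pre-bootloader's trust chain.
# The firmware verifies a signature on whatever it boots; the signed
# pre-bootloader then chain-loads loader.efi WITHOUT a further check.

TRUSTED_KEYS = {"microsoft-uefi-ca"}     # keys enrolled in the firmware

def firmware_boot(image):
    """Secure Boot firmware: refuse any image not signed by a trusted key."""
    if image["signed_by"] not in TRUSTED_KEYS:
        raise RuntimeError("Secure Boot violation: image not signed")
    return image["entry"]()              # run the verified image

def pre_bootloader():
    """The signed shim: chain-load loader.efi with no signature check."""
    return real_bootloader["entry"]()    # e.g. Grub2, possibly unsigned

# loader.efi carries no acceptable signature, yet still gets to run
real_bootloader = {"signed_by": None, "entry": lambda: "Linux booted"}
shim = {"signed_by": "microsoft-uefi-ca", "entry": pre_bootloader}

print(firmware_boot(shim))               # prints: Linux booted
```

The design choice is deliberate: only the small, audited shim needs Microsoft’s signature, while the real bootloader and kernel can change freely without re-signing.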
More information about the pre-bootloader will be made available once The Linux Foundation obtains the Microsoft key.