As cloud computing services become ever more popular, you might begin to wonder how much you can really trust them to perform when you need them. I decided to find out – by testing the top file-transfer/file-storage/file-backup services.
In many ways, getting a file from one computer to multiple computers is the most challenging task for the cloud. And because I like to use multiple computers running multiple operating systems, including Linux, Windows and the Mac, that function is particularly important to me.
Cloud Services Can Lag
I am pretty agnostic when it comes to cloud providers – as long as they are free or close to it. However, as I was moving files around while preparing my most recent book, A Week at the Beach: The 2013 Emerald Isle Travel Guide, I was a little surprised at the lags I sometimes experienced using the big-name cloud-based file-transfer services.
More than once when I wanted to move a file from one computer to another, I was disappointed by my cloud services. There were a few times I got so tired of waiting for a file to show up on my other computer’s cloud drive that I resorted to sneakernet – using a USB thumb drive.
After my book was published, I decided to go back and run some simple tests to see just how long the four best-known file-transfer/backup services actually take to put the files where you want them.
To compare Dropbox, Google Drive, Amazon Cloud, and Microsoft’s SkyDrive I started by exporting a 500K JPEG test image from Lightroom on my Windows 8 computer directly to each of the four services.
Fighting The Randomization Factor
After running the tests a few times, I noticed what can only be described as random operating system differences. Sometimes the file would pop up first on my Mac and other times it showed up first on my Windows 7 laptop.
In order to eliminate the operating system differences, I restarted the tests and this time stopped the timer when the file showed up on either my Mac running Mountain Lion or my Windows 7 laptop. I also reran my tests with a variety of sizes and types of files. In all I ran twenty-five sets of tests.
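For readers who want to run a similar experiment, here is a minimal Python sketch of the method: save the test file into the cloud folder on the sending machine, then run a polling loop like this on the receiving machine and note the elapsed time. The folder paths and file name are hypothetical placeholders, and this is an illustration of the approach rather than the exact harness behind the numbers below.

```python
import os
import time

def wait_for_file(path, timeout=600, poll=0.5):
    """Poll until `path` exists and its size stops changing, then return
    the elapsed seconds (or None on timeout). Run this on the receiving
    machine right after saving the test file on the sending machine."""
    start = time.time()
    last_size = -1
    while time.time() - start < timeout:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size:          # size stable -> sync finished
                return time.time() - start
            last_size = size
        time.sleep(poll)
    return None

if __name__ == "__main__":
    # Hypothetical sync-folder paths; point these at your own cloud folders.
    for service, folder in [("Dropbox", "~/Dropbox"),
                            ("Google Drive", "~/Google Drive"),
                            ("SkyDrive", "~/SkyDrive")]:
        target = os.path.expanduser(os.path.join(folder, "test_image.jpg"))
        elapsed = wait_for_file(target)
        print(service, f"{elapsed:.1f}s" if elapsed else "never completed")
```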
The differences were significant, if not overwhelmingly huge. The fastest synchs took less than 3 seconds, while a few others took several minutes. The biggest chunk of tests clocked in between 10 seconds and one minute. A few synchs never completed. But which service recorded the best times with the fewest problems?
Dropbox ended up being fastest 56% of the time. Even more importantly, it was slowest only 4% of the time.
SkyDrive brought up the rear. It was fastest on 12% of the tests, but slowest on a whopping 80% of them. It also had two files that never showed up on the Mac and one that never showed up on the Windows 7 laptop.
The Amazon Cloud slightly outpaced Google Drive – which had one file that never showed up on the Mac and another that took a very long time to complete.
If my tests convinced me of anything, it is that SkyDrive is a work in progress and has a long way to go. I even had trouble setting up the tests on SkyDrive.
My tests also revealed a number of odd results. When testing files saved from Word, strange extra files sometimes showed up on all the cloud drives except Dropbox. The file names always began with the characters “~$” – the prefix Word uses for its temporary owner/lock files. Sometimes the mystery files disappeared and sometimes they hung around.
Cloud Drive Recommendations
So here are some quick recommendations:
- First, do not treat your cloud drive as one huge dumping ground. Create folders and try to force a little organization on yourself.
- If you save a file to the cloud in order to work on it from another computer, quit the application or close the file on the first computer after you have saved the file to the cloud drive.
- Make sure you have a local copy of important files in your documents folder – not just the replicated cloud folder on your computer. Interesting things sometimes happen when cloud files get updated or deleted from another computer. When you come back to the computer where you first created a file, you could be in for a nasty surprise.
- If you cannot get a cloud folder on your computer to update, try quitting the cloud application or rebooting your system.
Dropbox and Amazon appear to be the most reliable solutions, with only occasional delays. Google isn’t far behind, and I can’t imagine that Microsoft won’t work hard to improve SkyDrive – the company’s subscription model depends on it.
While both Google and SAP shared a 1980s music sensibility at their respective conferences this week – Billy Idol performed at Google I/O and U2’s Bono walked the floor at SAPPHIRE – the two companies see the future of computing very differently. Even when the two companies agree on the importance of cloud computing, their strategies couldn’t be more different.
For one thing, SAP’s new cloud isn’t even a cloud. But then, SAP’s Bono wasn’t really Bono, either, but merely an impersonator.
Forrester analyst Stefan Ried takes SAP to task for getting cloud wrong in its new HANA Enterprise Cloud:
“The Hana Cloud is a very careful move to a new business model. It is not disruptive and will NOT accelerate Hana usage to the many more customers who have been struggling with Hana on-premises because of its licensing.
“The announced Hana Enterprise Cloud follows the ‘Bring Your Own License’ paradigm. While this is great for customers that already have a Hana license and would like to relocate it into the cloud, it is useless for customers that might have largely fluctuating data volumes or user numbers and might specifically use a cloud because of its elastic business model.”
In other words, it’s not really a cloud.
Amazon, more than any other cloud vendor, has insisted that such “clouds” don’t deserve the name, as they fail to live up to the very premise of cloud computing: truly elastic, on-demand software. But while Amazon normally reserves its ire for private cloud vendors, SAP’s HANA Cloud is even less of a cloud because it requires you to bring your own HANA license to the party.
Meanwhile, over at Google I/O, Google introduced improvements to Google Cloud Platform and made Google Compute Engine available to all. Like Amazon, Google is making a powerful array of infrastructure technologies available on demand and fully elastic.
Google, like Amazon, realizes that the future of computing is not going to be won by the vendor with the prettiest device or even the best user interface: it will be won by the company with the best cloud services. As Redmonk analyst Stephen O’Grady pointed out, summarizing Google’s first day announcements:
“[Google is clearly telegraphing that] the war for mobile will not be won with devices or operating systems. It will be won instead with services.”
SAP must see this, too, but appears hamstrung by its past, in true “Innovator’s Dilemma” fashion. It has so much revenue tied up in legacy deployments of legacy software that even releasing a kind-of, sort-of, not-really cloud offering is the best it can do.
This is not to suggest that HANA is bad technology. By most accounts, it’s quite good. But as Ried argues, “The SAP Hana Enterprise Cloud is version 2 of the initial Hana in-memory database, but the cloud offering based on ‘Bring Your Own License’ is more version 0.1 of a cloud business model.”
Which is to say, it’s no cloud at all. While this may not seem like a big deal, enterprises are barreling into true clouds for a wide variety of needs, and no longer merely for development and test workloads. If SAP wants to participate in the future of enterprise computing, it should learn from the companies that are inventing that future: Google and Amazon.
BlackBerry(R) Messenger (BBM(TM)) will be available to iOS(R) and Android(TM) users this summer, with support planned for iOS 6 and Android 4.0 (Ice Cream Sandwich) or higher, all subject to approval by the Apple App Store and Google Play. BBM sets the standard for mobile instant messaging with a fast, reliable, engaging experience that includes delivered and read statuses, and personalized profiles and avatars. Upon release, BBM customers would be able to broaden their connections to include friends, family and colleagues on other mobile platforms.
In the planned initial release, iOS and Android users would be able to experience the immediacy of BBM chats, including multi-person chats, as well as the ability to share photos and voice notes, and engage in BBM Groups, which allows BBM customers to create groups of up to 30 people.
“For BlackBerry, messaging and collaboration are inseparable from the mobile experience, and the time is definitely right for BBM to become a multi-platform mobile service. BBM has always been one of the most engaging services for BlackBerry customers, enabling them to easily connect while maintaining a valued level of personal privacy. We’re excited to offer iOS and Android users the possibility to join the BBM community,” said Andrew Bocking, Executive Vice President, Software Product Management and Ecosystem, at BlackBerry.
BBM is loved by customers for its “D” and “R” statuses, which show up in chats to let people know with certainty that their message has been delivered and read. It provides customers with a high level of control and privacy over who they add to their contact list and how they engage with them, as invites are two-way opt-in. iOS and Android users would be able to add their contacts through PIN, email, SMS or QR code scan, regardless of platform. Android users would also be able to connect using a compatible NFC-capable device.
BBM has more than 60 million monthly active customers, with more than 51 million people using BBM an average of 90 minutes per day. BBM customers collectively send and receive more than 10 billion messages each day, nearly twice as many messages per user per day as other mobile messaging apps. Almost half of BBM messages are read within 20 seconds of being received, indicating how truly engaged BBM customers are.
Today, BlackBerry also announced BBM Channels, a new social engagement platform within BBM that will allow customers to connect with the businesses, brands, celebrities and groups they are passionate about. BlackBerry plans to add support for BBM Channels as well as voice and video chatting for iOS and Android later this year, subject to approval by the Apple App Store and Google Play.
If approved by Apple and Google, the BBM app will be available as a free download in the Apple(R) App Store(SM) and Google Play store. Additional details about system requirements and availability will be announced closer to the launch.
In the dark old days of the late 1990s and early 2000s, debates would rage about whether open source software is as good as proprietary software. And it was all a matter of opinion.
Then, in 2006, the Department of Homeland Security partnered with a software code analysis company called Coverity to examine open source code for security vulnerabilities and software defects. Each year since, Coverity has published a report on the quality of open source code, and each year, the company has found that it isn’t that different from proprietary software. That seemed to settle the issue.
But the latest report, published on Wednesday, found something new: the code quality of open source projects tends to suffer when they surpass 1 million lines of code, whereas proprietary code bases continue to improve past that mark.
The Coverity Scan tool performs automated static analysis of code bases, looking for defects such as resource leaks, illegal memory access, and control flow issues. It’s free for open source projects and available to proprietary software vendors for a fee. Coverity drew on its user base for the report, analyzing 118 active open source projects and 250 proprietary projects.
The study found that open source projects have an average of 0.69 defects per 1,000 lines of code, while proprietary projects have about 0.68 defects per 1,000 lines. But when projects were compared based on the total number of lines, some intriguing differences emerged.
Open source projects with 500,000 to 1 million lines of code had, on average, 0.44 defects per 1,000 lines of code. Proprietary projects in the same range had 0.98. But open source projects with over 1 million lines of code had 0.75 defects per 1,000 lines. Proprietary projects in the same range had only 0.66.
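For context, defect density here is simply the defect count divided by thousands of lines of code (KLOC). A quick back-of-the-envelope calculation, using an illustrative line count rather than figures from the report, shows what those averages mean in absolute terms:

```python
def defect_density(defects, lines_of_code):
    """Defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000.0)

# Illustrative example: a 1.5-million-line project at the open source
# and proprietary averages reported for projects above 1 million lines.
loc = 1_500_000
print(round(0.75 * loc / 1000))   # ~1125 expected defects at 0.75/KLOC
print(round(0.66 * loc / 1000))   # ~990 expected defects at 0.66/KLOC
print(defect_density(1125, loc))  # 0.75
```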
The report speculates that the reason for the discrepancy is that when open source projects are young, they’re developed by a small group of dedicated volunteers. As the project grows and new developers start contributing code, it becomes harder to manage. On proprietary projects, by contrast, the process starts out haphazard but becomes more rigorous as the project grows.
“But this doesn’t mean that the quality of the codebase suffers,” the report cautions. “These are typically projects that are heavily adopted in the industry, have the backing and support of a commercial company and still have above average software quality.”
But it’s an important issue as open source projects continue to grow. Only 13 projects were over the 1 million line mark, but the average size of the open source projects Coverity analyzed was 580,000 lines in 2012, up from 425,179 in 2008. In fact, the report suggests it’s this growth that made the average defect density increase from 0.45 in 2011 to 0.69 in 2012.
Data center computing demand grew 63% in 2012, requiring enterprises and data center operators to build new facilities to accommodate the market.
Although some data centers may seem to be placed at random, selecting a data center location is a little more strategic than throwing darts at a map. In fact, there are many different factors that affect this decision-making process. The goal is to ensure that new facilities address local service demand efficiently and sustainably while still making a profit (or meeting corporate needs). Whether the plan is to build a new facility or retrofit an existing building, a number of factors must be considered to ensure that the potential $1 billion investment will deliver strong returns, including type of service, proximity to end users, potential for disaster and climate.
While all of these factors come into play for data services and hosting providers, many of them are also concerns for enterprises building their own data centers for corporate use.
Is There Enough Demand?
Typically, the first step for hosting companies in choosing where to put a data center is talking to customers, prospects and partners to determine where companies are looking for hosting support. This may seem obvious, but for providers to ensure the success of a new data center, they must fill a substantial portion of the facility before the doors even open to guarantee profits during the first month.
If a provider can’t fill enough of a new facility, one way to reduce the financial risk is to use a modular approach. Many companies, including Dell and IBM, use this approach to accommodate growing data center demand quickly, as it allows for the gradual buildout of infrastructure. Additionally, going modular means that providers don’t have to dedicate resources to power and cool unused aisles and racks – they just pay for what’s being used.
Who Needs These Services?
In addition to demand, the location must also match the services to be offered. For example, if a provider is receiving numerous requests for co-located trading equipment on Wall Street, then a co-location facility as close to Wall Street as possible – ideally on the same block – will best serve the demand.
Alternatively, if a provider is predominantly seeing non-latency-sensitive demand across the greater New York area, it can build a hosting facility anywhere nearby. Building a data center on Long Island would cost much less in rent and utilities, and would still be able to meet this market’s hosting needs.
How Likely Are Natural Disasters?
Another element that plays a role in the decision-making process is the number and severity of natural disasters common to the region. For instance, areas prone to tornados, flooding or hurricanes raise a red flag because they could knock out power and damage the facilities. Similarly, operators may shy away from building new data centers along turbulent coastlines and instead look at real estate further inland to avoid water or salt damage.
Hosting providers haven’t always considered volatile weather a determining factor, though. For instance, despite the likelihood of hurricanes and tornados in areas like the Southeast and Midwest, both these areas have a high concentration of data center facilities. But as data center infrastructure and functionality rise in importance, this factor can no longer be ignored – especially since data center outages can cost an average of $5,600 per minute – that’s $336,000 per hour!
What About Free Cooling?
Average temperature is also very important to keep in mind when choosing where to construct a new data center, as it can greatly influence utility costs. Power accounts for an estimated 50% of data center operating costs, which is why many operators choose temperate environments that won’t add to the heat generated by servers. In a milder climate, operators can also take advantage of “free cooling,” such as open-air cooling, to further cut cooling costs. Large companies like Facebook, Amazon and Apple have been opening data centers in the Northwest region of the U.S. to take advantage of the area’s cool climate and potential for free cooling.
Selecting a new data center location follows complex formulas that may not result in the same outcome for every hosting operator or enterprise.
Every company building a data center will prioritize different goals and concerns. For example, data centers located in Los Angeles face big bills when it comes to climate control, while New York City data centers must grapple with extremely high rent. It’s all a matter of finding a location that offers the greatest number of benefits – without costing a fortune.
Many people work hard at making the transition from programmer to entrepreneur. While it is true that a programmer is someone whose intelligence is beyond doubt, diving into entrepreneurship calls for intelligence in rather different areas. A brilliant programmer can fail again and again at founding a business, while an average programmer can end up more successful than a more brilliant one. That is because entrepreneurship requires more than coding and other technical skills; the range of skills and insight an entrepreneur needs is enormous.
Here are some points programmers can apply during the transition to becoming an entrepreneur:
Coding Is Only 5% of Your Business
One of the biggest problems we see is developers getting trapped in the world of coding, spending long hours perfecting a function on a website or building a feature that shows off the latest technology. Now you have to write code that fits into a software business. The code certainly needs to be high quality, free of bugs and security holes. But the best code in the world means nothing if nobody knows about your product. Code is worthless when taxes come due, and you can be thrown in jail for evading them. Code means nothing if you are sued for allegedly using pirated software without a legally valid license.
In many forums and at many events, we meet entrepreneurs with programming backgrounds who are so busy talking about coding that they forget the other aspects of the business that matter just as much. Those are certainly harder to talk about than coding, but nobody said this would be easy!
Design Is Everything, Relative to Your Competitors
Your product must be well designed, and an average programmer without the right background can’t pull that off alone. Remember, though, that your design only needs to look better than your competitors’. If you are building an office IT system, you don’t need anything dazzling or elaborate. It is nice if you can manage it, but the main goal is to make it obvious to customers that your design is better when they compare your product with the competition’s. People will judge you by what they see first-hand.
Get Used to Long-Term Thinking
Nothing pleases a programmer more than turning code around quickly, spotting bugs in a program and stamping them out. The problem is that in an ISV (Independent Software Vendor), most of the work outside programming does not simply get finished. You genuinely need long-term thinking. Things like finding your market and positioning the product can take months or even years. There are no instant results like the ones you get from writing code, so you have to keep forcing yourself to think on a long horizon. Try asking yourself now: six months from today, where will you be taking your product, marketing and sales?
Admit That You Don’t Understand Your End Users
Chances are the software you build serves a field you have never worked in yourself. That is a window of opportunity you can exploit, but be aware that you must do more than market research: you have to understand your actual customers. Talk to them. Even if you are reluctant, force yourself to do it. Without talking to real end users, you will never know which features were a waste of your time and which missing features really matter.
The classic mistake programmers make is to start by implementing the same set of features their competitors offer. That is a bad strategy. It is like copying a classmate’s homework: you can both make the same mistakes. By talking to customers, you can avoid the mistakes your competitors made and make your features better.
Love Your Customers
Many software developers come from office environments with an IT background. In most IT departments there is a general disdain for the internal customers inside the company, which is hardly surprising, since IT is constantly being asked to do more work (or the reverse).
Now is the time to leave all of that behind. Plenty of ISVs still seem to operate that way, and there is no room for it in commercial software. The only way to succeed is to love your customers. That means meeting their needs as well as you possibly can and doing your best for them, even when you cannot explain why. When they choose a competitor’s product, respect the decision and remind them to come back if the competitor turns out not to meet their needs. By being gracious when customers decide not to buy, you raise the odds that they will return to your business later.
Remember to Design for Ease of Use; Even Knowledgeable Users Prefer Simplicity
Your user interface should not be a showcase for the latest technology. Keep it simple. Users who know their way around IT like simplicity just as much as novices do. The most important reason to keep things simple is the trial user. Trial users will give you only a little of their time; if you waste it by making them puzzle over a sophisticated but convoluted interface, you can hardly blame them for moving on to a provider whose product is easier to understand.
Remember to Pitch Your Ideas to People Outside Your Project
Make sure you always set aside time to show your latest product progress to someone who is not part of your team. A fresh perspective will often find flaws in your interface. Even if that person knows little about your field, you will be surprised how many problems they spot that you missed.
Don’t Hesitate to Throw Away What Isn’t Needed
Nothing is more annoying than discarding work you struggled so hard to finish. In some cases, though, you as a programmer need to accept that throwing away code you consider beautiful and perfect is exactly what is required. Ideally you discover this before the code ships to customers. When you find such features, get rid of the code or the feature before it causes problems.
Patience Is Your Main Weapon
There will never be enough time to solve every problem you have to solve. What should take weeks usually takes months. Try to internalize the importance of patience; it is essential for getting through the process. Avoid giving customers overly ambitious dates or expectations whenever possible.
Work the Way You Did When You Were Learning to Program
Remember when you first learned programming and had to read every book your lecturer or teacher assigned? Apply that same determination to the process of becoming an entrepreneur. Read everything you can about your target market, running a small business, marketing, general management, time management, and so on. Ideally you do this reading before you start coding; the mistakes you avoid that way are well worth the time you commit to it.
Into an air of great anticipation, Eric Schmidt and Jared Cohen have published The New Digital Age. (Sad to say, my publisher never placed full-page ads in the New York Times.) The book immediately shot to the top of the charts and justly so. The authors are as smart and plugged-in as it gets. And they have the resources and connections necessary to break new ground.
The result is a book full of fresh thinking, tightly researched examples and creative twists that are bound to get the digerati buzzing and cause regular people to reflect deeply about our future.
The book takes an old idea — that there are both digital and physical worlds — and extends it, arguing that today nothing less than two civilizations have arrived. One developed over thousands of years and the other is in its infancy. One is a world of old cultures, nation states, governments, institutions, power structures and laws. The other is a dynamic, ungoverned, even anarchistic world where boundaries are porous, rules unclear and where power is resilient and distributed. While these two co-exist, each restraining the negative aspects of the other, they increasingly come into conflict.
In the next 10 years, the number of people using the Internet will grow from 2 billion to 7 billion. We should prepare ourselves for massive disruption.
As Google executives, the authors would surely cause themselves and the company grief by taking positions on all the controversial issues involved. So they have chosen to predict the future rather than polemicize about how to achieve it. The upshot is a book packed with predictions on issues such as the future of states, revolution, terrorism, conflict, combat, citizenship and identity. Cleverly, these predictions contain many veiled or not-so-veiled opinions about what is to be done.
Familiar concepts and language of the old civilization are extrapolated to the new – producing fresh and often startling concepts that will cause the most diehard digerati to reflect deeply, yet still be accessible to anyone who cares about the future.
You might expect two Google executives to paint a rosy picture. Instead we’re treated to a future that is dizzying and deeply disturbing. Get ready for:
- Virtual honor killings. Identity, a citizen’s most valuable asset, will exist primarily online. In deeply conservative societies where social shame can be devastating, we could see a kind of “virtual honor killing” — dedicated efforts to ruin a person’s online identity, with material real or fabricated. In some cultures this might incent a young woman’s family to kill her.
- Man-in-the-middle attacks. An eavesdropper steps into a two-way communication, intercepts the messages travelling in both directions and modifies their content to manipulate the conversation, while each party thinks they are communicating directly with the other.
- Balkanization. Imagine if a country or even a group of deeply religious Sunni-majority countries — say Saudi Arabia, Yemen, Algeria and Mauritania — decided to build a “Sunni Web.” While still part of the larger Internet, it would become the main source of information, news, history and activity for citizens living in these countries. Their Web would be constrained and limited to a narrow point of view.
- A decline in confirmation bias. Confirmation bias is the tendency of people, consciously or otherwise, to pay more attention to sources of information that confirm or reinforce their existing worldview. Promisingly, a recent Ohio State University study suggests that this effect is weaker than perceived, at least in the American political landscape.
- Camera drones. Consider a society deeply concerned with privacy, yet saturated with camera-equipped smart phones and inexpensive camera drones. We will need designated “safe zones” where photography requires a subject’s consent.
- Internet asylum seekers. A dissident who can’t live freely under one country’s autocratic Internet and is refused access to other states’ Internets will seek physical asylum in another country to gain virtually unimpeded freedom on its Internet.
- Virtual multilateralism. Authoritarian states like Belarus, Eritrea, Zimbabwe and North Korea — outcasts all — would benefit from joining an autocratic cyber union, where censorship, monitoring strategies and technologies could be shared.
- Virtual sovereignty and statehood. Hounded in both the physical and virtual worlds, groups that lack formal statehood may choose to emulate it online. This opportunity to establish sovereignty virtually may well be a meaningful step to actual statehood. The Kurdish populations in Iran, Turkey, Syria and Iraq might build a Kurdish web as a way to carve out a sort of virtual independence.
- Discretionary power. With organizations such as WikiLeaks and the many WikiLeaks wannabes that will surely spring up, who gets to decide what material is suitable for release, and what must be redacted, even temporarily? And what happens if the person making these decisions is willing to accept the collateral damage of innocent individuals?
- Data permanence. What is Tweeted, blogged, or written on someone’s Facebook wall can never fully be stricken. This data permanence is an intractable challenge, but the type of political system and level of government control will determine its impact. In an open democracy, it will be a free-for-all in the short term. In a world with no delete button, peer-to-peer networking will become the default mode of operation for anyone looking to operate under or off the radar.
- Cyber terrorism. Terrorist groups and states will make use of cyber-war tactics, though governments will focus more on information-gathering than on outright destruction. Stealing trade secrets, accessing classified information, infiltrating government systems, disseminating misinformation — traditional intelligence agency ploys — will make up the bulk of cyber-attacks between states.
- Virtual statecraft. States will be wistful for the simpler days of foreign and domestic policy. Power in the physical world is no assurance of power in the digital world. This disparity presents opportunities for small states looking to punch above their weight, and would-be states with lots of courage. Countries will have to navigate through the contradictions that may exist between another nation’s physical and virtual foreign and domestic policies.
- Transnational revolution. Future revolutionary movements will be more transnational and inclusive than many previous revolutions. Language won’t be a barrier, as sophisticated translation software will allow dissidents who speak different languages to collaborate. Communication technologies will allow activists to engage from afar without risk. “Virtual courage” describes how global social media platforms will give potential activists and dissidents confidence that they have an audience, whether or not it is true. We will see “revolution tourists”: people who crawl the web for online protests to join and help amplify, just for the thrill of it.
- Online vigilantism. We will see online mobs seeking individuals by sharing photos and descriptions of criminal or marginal behavior, just as some newspapers wrongly pointed the finger at innocent bystanders in a frenzied quest to be the first to identify the Boston Marathon bombers.
- A “digital caste system” where “people’s experience will be greatly determined by where they fall in the structure.” The tiny minority at the top will be largely insulated from the downside of technology by their wealth or location. The two billion already connected are the world’s middle class. The next five billion will receive the greatest benefits and the worst drawbacks.
The book will cause plenty of debate and that’s good. Consider the issue of intellectual property. The book discusses copyright and piracy as if the intellectual property laws of the physical world are completely sensible and automatically applicable to the new world. Rather than making the case for a complete revamp of our laws, which in my view is required, the authors seem to side with the corporations and governments in democratic countries that label our children pirates.
Schmidt and Cohen argue that privacy is important, but are deeply pessimistic that it can be defended. Among the reasons is that political hawks wait for serious public incidents, such as the Boston Marathon bombs, to ratchet up their demands for cyber oversight. This legitimizes activities such as data-mining, which combines our digital breadcrumbs, such as phone calls, Internet browsing history, Google searches, bank records, credit card purchases, and medical records to inspect and predict the behavior of every citizen.
They argue that the irresistible benefits of the virtual world are such that we voluntarily relinquish things we value in the physical world, like privacy, personal information and even security. Some might choose to live “off the cyber grid,” boycott the digital world, and live a quiet and simple life. Governments will soon view such behavior as suspicious, and will build registries of citizens who behave so oddly. Your non-cyber behavior will attract cyber scrutiny.
To be sure, we’re all volunteering more information than we have in the past, and governments and corporations everywhere are motivated to collect and exploit as much data as they can. But there are workable policies and approaches individuals and institutions can take to defend this basic right. I wish the authors had talked to Ontario’s privacy commissioner Ann Cavoukian to learn about her Privacy by Design principles and program, which is being adopted broadly to address this issue.
Privacy by Design argues that privacy cannot be assured solely by compliance with legislation and regulatory frameworks; rather, every organization has a responsibility to make privacy its default modus operandi. The concept argues for a set of principles that can enable individuals to defend their privacy and control over their personal information, help companies gain a sustainable competitive advantage and ensure that governments don’t lose trust.
A book addressing foreign affairs seems incomplete to me without a chapter on global cooperation, problem solving and governance. It’s a perfect arena for the authors to develop their core thesis. The physical world has a set of global institutions that came out of the Bretton Woods agreements after the Second World War — the World Bank, the International Monetary Fund, the UN and others culminating in the G8 and G20. These institutions are increasingly ineffective. Contrast these to the new multi-stakeholder networks based on the Internet where tens of millions of people are cooperating to solve problems in new ways. But little is known about this new paradigm in global governance.
In another section, the book argues correctly that dictators, autocrats and oppressors should be worried. Connectivity provides unprecedented tools to scrutinize, take collective action and topple old regimes. But while there will be more revolutionary activity, there will be fewer successful revolutions. The accelerating pace of revolution means that movements have a shorter gestation period in which to create the strategies, organizations and leaders that can not only bring down the old regime but actually take power. The authors call these revolutionary false starts.
Rather than simply elaborating on this well-known trend, why not discuss how the emerging leaders could use the same social tools to build consensus, policies, and organizational capacity required to win elections, govern and forge democratic secular societies? There is a great discussion about how the Internet can help in reconstructing societies after disasters. How about a discussion about how it can help revolutionaries actually take power to build a better world?
The authors write that they are hopeful. “We believe the vast majority of the world will be net beneficiaries of connectivity, experiencing greater efficiency and opportunities and an improved quality of life.” They provide ample evidence that the arc of history is a positive one, bending toward freedom. “In the long run the presence of communications technologies will chip away at the most autocratic governments… it’s no coincidence that today’s autocracies are the least connected societies in the world.”
I’m hopeful too. But I must confess after reading this deeply disturbing book I’m struck anew by the enormity of the challenge to ensure that this smaller world our children inherit is a better one.
If you care about the future, and most of us do, read this book. It will give you resolve to take action and perhaps even help you figure out what is to be done.
A technology born of the Web and accelerated by mobile is now blossoming inside businesses. Ignored for years, application programming interfaces—a key layer of connectivity between disparate software—are undergoing a renaissance.
The trickiness of managing these connections, and their importance to the way businesses run their operations today, explains why we see vendors like Intel buying Mashery. Or CA Technologies snapping up Layer 7 Technologies. Or MuleSoft picking up ProgrammableWeb. Or, this morning, a startup contender, 3scale, raising $4 million from investors.
And that’s just in the last seven days.
Of course, this raises the question: what the heck is an application programming interface?
First, The APIs
In the simplest terms, an application programming interface, or API, is a set of requirements that enables one application to talk to another application.
On your desktop, an API is what lets some applications talk to others (like Word to Excel and vice versa), or access features of the operating system. Such APIs are familiar ground to any programmer who has built an application that needs to share features or data directly.
This is the API with which I am familiar: steady sets of code and requirements that lived on the operating system. But there’s a whole other class of APIs, built for Web services, that has kicked open the field of API management.
Web APIs are analogous to their older counterparts, but they serve as gateways to Web-based services, like Twitter or Facebook or Foursquare or Amazon. They are what enable developers to build applications that communicate directly with those services.
If you have a third-party app that connects to, say, Twitter, that app communicates with Twitter’s API to handle the actual connection. You, as the user, never see this API. As far as you are concerned, the whole thing is seamless. You post a tweet in the app and it shows up in the Twitter feed. But it’s the API that handles the job.
It is easy to think of APIs in this context as doors; they let data in and out of a Web service. But they are rarely indiscriminate doors. Like any door, they only swing in a certain way. And they are typically open for only the people who have keys to the lock. They have rules.
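To make the door-and-key idea concrete, here is a minimal sketch of what a third-party app’s side of the conversation can look like. The endpoint, token and response fields are hypothetical placeholders, not any real service’s API:

```python
import json
import urllib.request

# Hypothetical endpoint and token -- placeholders, not a real service's API.
API_URL = "https://api.example.com/v1/timeline?user=alice&count=5"
API_TOKEN = "replace-with-a-real-token"

request = urllib.request.Request(
    API_URL,
    headers={"Authorization": f"Bearer {API_TOKEN}"},  # the "key" that opens the door
)

try:
    # The service answers with structured data (typically JSON),
    # which the app then renders in its own interface.
    with urllib.request.urlopen(request) as response:
        posts = json.loads(response.read().decode("utf-8"))
    for post in posts:
        print(post.get("author"), "-", post.get("text"))
except OSError as exc:  # the placeholder host won't resolve
    print("request failed:", exc)
```

The Authorization header plays the role of the key; without it, or with the wrong one, the service simply refuses to open the door.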
And rules have to be managed.
Here Come The Managers
It turns out, explained Ed Anuff, a vice president at API management vendor Apigee, there are actually a lot of things that need to be managed about Web service APIs.
There’s the sign-up process for developers who express interest in using an API. There’s the documentation for the API, so they can write code that accesses it. There are credentials to be issued to both developers and users—these are all just part of the scope of information that has to be managed when a service releases an API for developers.
“All of that stuff is part of what an API management tool does,” Anuff said.
A critical function of API management tools is handing out the keys that let authorized developers unlock the door to a Web service’s data and functionality. Some APIs charge for access; API management tools handle billing. Sometimes there are limitations on access; those must be enforced.
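As a toy illustration of that gatekeeping, assuming nothing more than an in-memory table of issued keys, the core checks an API management layer performs might look like this; real products wrap billing engines, analytics and policy controls around the same basic idea:

```python
# Toy in-memory view of what an API management layer tracks per key:
# who the key was issued to, the plan, and a running call count for billing.
API_KEYS = {
    "key-123": {"developer": "alice", "plan": "free", "price_per_call": 0.0},
    "key-456": {"developer": "bob",   "plan": "paid", "price_per_call": 0.001},
}
call_counts = {key: 0 for key in API_KEYS}

def authorize(api_key):
    """Reject unknown keys; meter authorized calls so they can be billed."""
    record = API_KEYS.get(api_key)
    if record is None:
        return False
    call_counts[api_key] += 1
    return True

def monthly_bill(api_key):
    """Charge per metered call according to the key's plan."""
    return call_counts[api_key] * API_KEYS[api_key]["price_per_call"]

authorize("key-456")
authorize("key-456")
print(authorize("nope"))          # False -- no key, no entry
print(monthly_bill("key-456"))    # 0.002
```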
API management began as a way for popular consumer Web services to open up to the creativity of independent developers. But what fueled the rise of API management as a cottage industry was enterprise IT managers who saw the success these household Web names were having with their APIs and who wanted to adapt the same model for their internal infrastructure.
“Lots more companies looked at these Web services and saw things they needed,” Anuff said. “Internal APIs didn’t have this self-service stuff.”
What really kicked the industry in the pants, however, was the tidal wave of mobile computing. Rather than building two separate versions of software for a desktop website and a mobile app, it’s far more efficient to build an API for the underlying service that holds user data and business logic, and then build desktop and mobile versions of software that talk to that same API.
Add up all the different mobile platforms out there—iPhone, Android, Windows Phone, and so on—and an API rapidly becomes the only sensible architectural approach. Suddenly enterprise developers needed much better API management to handle all of the apps they wanted to build for their own employees on a variety of platforms.
It’s All About The Data
Anuff gave two big reasons why enterprises are seeking API management tools.
The first was operational. If a developer produced a poorly written app that made a burst of requests to an API, one right after the other, for instance, an API management tool would enable the IT staff to throttle the requests hitting the company’s Web service to something approaching a sane level until the app could be fixed.
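One common way to implement that kind of throttling is a token bucket, which lets normal traffic through but caps bursts. This is a generic sketch with made-up limits, not a description of any particular vendor’s product:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second on average, with bursts up to
    `capacity`; anything beyond that is rejected until tokens refill."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A misbehaving app firing 100 back-to-back requests gets only the burst
# allowance through; the rest are throttled until the app is fixed.
bucket = TokenBucket(rate=5, capacity=10)   # hypothetical "sane level"
allowed = sum(bucket.allow() for _ in range(100))
print(f"{allowed} of 100 burst requests allowed")
```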
The second example is very likely the reason why there’s been so much interest in this sector of late.
Recall that when an API is used to connect to a service, all of the data shared by the third-party app and the service passes through the API and, therefore, through the API management tool. This means that API management tools can be one-stop shops for rich and valuable data.
Larger vendors who want to keep their skin in the big-data game are going to be very interested in startups in the API management space. The analytics that API management tools can provide for the requests they handle are a rich gold mine of information, and a new source of data is bound to attract attention.
Which explains a lot of the hubbub.
The Machine-to-Machine Future
Today, the APIs we think about most often – like Twitter’s and Facebook’s – typically handle requests generated by people clicking on a website or swiping and tapping on an app. But another, far more interesting potential for APIs lies in processing requests generated by machines – a market that could hit $18 billion in spending by 2014.
Think of the smart, Internet-connected energy meters being added to homes. Or diagnostic sensors in your car that report back to the manufacturer when there are signs of an incipient engine failure. Or systems that detect atypical network traffic and reroute it on the fly to avoid slowdowns or outages. These all need defined rules for how one machine talks to another. And those rules are found in – you guessed it – APIs. APIs that need to be managed.
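As a concrete (and entirely hypothetical) illustration, a smart meter reporting a reading over such an API might do nothing more than POST a small JSON document to an agreed-upon endpoint:

```python
import json
import urllib.request

# A smart meter reporting a reading -- the endpoint, fields and credential
# are all hypothetical placeholders.
reading = {"meter_id": "meter-0042", "kwh": 13.7,
           "timestamp": "2013-05-20T14:05:00Z"}

request = urllib.request.Request(
    "https://api.example-utility.com/v1/readings",
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer device-credential"},
    method="POST",
)

try:
    # In production the device would retry on failure, and the API
    # management layer would authenticate, throttle and log the call.
    with urllib.request.urlopen(request) as response:
        print("accepted:", response.status)
except OSError as exc:  # the placeholder host won't resolve
    print("request failed:", exc)
```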
That’s the real growth market for APIs. And it suggests that what we’ve seen in the past week is only the first glimmer of a vein of gold that smart people will mine for decades to come.
Today, businesses are making enormous investments in Big Data and analytics to secure new customers, enhance existing client relationships and gain a competitive advantage. Worldwide, the trend is accelerating rapidly.
70% Of Companies Rely On Used Tech
What you may not know is that many organizations are turning to certified refurbished IT equipment to make it all happen. Refurbished equipment is a cost-efficient and often environmentally beneficial solution to meeting the growing demand for mined data. In fact, according to research firm IDC, up to 70% of companies have purchased used, reconditioned equipment in the past two years.
The facts are clear. Refurbished machines extend the life of older IT equipment that otherwise would require disposal, they serve as an affordable alternative to new equipment, and they can help a company improve its business case to acquire much-needed analytics solutions that will help them make better decisions and grow.
From a quality standpoint, refurbished machines are reconditioned, tested and certified for resale using rigorous processes and original manufacturing standards. They can even be rebuilt to meet specific client needs, such as a new analytics application that transforms information into insight and helps a company improve its marketing outreach. Maintenance contracts and guarantees can also be attached, to provide even greater assurances.
3 Ways Refurbished Equipment Goes Green
In addition to all this, refurbished IT equipment works seamlessly with new technologies to help organizations:
- Support innovation, grow and transform
- Reduce total cost of ownership so capital can be used for other needs
- Meet business and IT requirements despite financial or credit limitations
The combination of high quality and low price can also make pre-owned equipment a good solution for special projects, temporary capacity and unexpected changes, for example when a physical move doesn’t require the latest technology. Pre-owned technology can also help businesses maintain a legacy environment or facilitate a disaster recovery solution.
In the end, the reuse of IT equipment is not only a cost-effective approach to expanding/upgrading IT infrastructures, it is an environmentally responsible course to take.
To be sure, a wide variety of used equipment is available, including personal computers, servers, storage, printers and networking devices. IBM uses refurbished equipment to meet certain IT needs within its own infrastructure.
On this Earth Day 2013, we recognize the positive impact of refurbished IT equipment on the environment, as well as a company’s bottom line.
Turns out that social networking was the most popular online activity in 2012, soundly trouncing email, news and shopping (among other activities) when it comes to time consumption.
Experian Marketing Services has just published a report breaking down time consumption rates for the most common online activities across personal computers and mobile devices in the United States, the United Kingdom, and Australia.
If broken down into an hour, analysts found social networking would have accounted for 27 percent of online activity on PCs during that time frame last year.
For the U.S. alone, the number is actually closer to 16 minutes out of every hour online for social networking and forums, followed by nine minutes on entertainment sites and five minutes dedicated to shopping.
Email and business trailed those more consumer-friendly activities at three minutes apiece.
While this news might be slightly troubling for employers, Bill Tancer, general manager of global research for Experian Marketing Services, used the report to describe the opportunity (and challenge) this presents for digital marketers:
“Understanding consumer behavior across channels is more important than ever as more visits are being made on the move, particularly among social networking and email. With smart phones and tablets becoming more powerful, our data clearly indicates the difference between mobile and traditional desktop usage further enabling the ‘always on’ consumer mentality. Marketers need to understand these differences, as well as regionally, to ensure campaigns can be tailored for better and more effective engagement.”
However, there are a few important items to point out. First, analysts found that time spent on social media in these three markets still declined by single digit percentage points from the previous year.
More importantly, the situation is very different for mobile.
Just for the first quarter of 2013, email accounted for 23 percent of time spent on mobile devices in the United States, while social networking clocked in at only 15 percent.