Apr 17

Google Launches Chrome Remote Desktop On Android, Allowing Mobile Access To Your PC

Author: admin | Category: Android, IT News

Google this morning launched a mobile client application called “Chrome Remote Desktop app for Android” (whew!) which allows for remote access to your Mac or PC from your Android device, whether smartphone or tablet. The new app is an extension of Google’s previously launched Chrome Remote Desktop screen-sharing service, which allows you to share your desktop’s screen with other Chrome browser or Chromebook users.

As with its big-screen counterpart, to use the Android application you first have to install a helper application on your desktop or laptop computer. That app is available in the Chrome Web Store and works on Windows (XP and above), Mac (OS X 10.6 and above) and Linux computers. The helper app installs as an extension to Google Chrome or to Chrome OS, which powers Google’s Chromebooks.

Once installed, you’ll be able to open the Android app and connect to any of your computers with just a tap, manage them, and navigate through their files and folders from afar — like a modern version of GoToMyPC, for example.

We’ve known an Android client was in the works for some time, as there was even a functional version of the Android client available back in January, though it required that you compile the app from source in order to use it. An iOS version is also in the works, but its development is said to be further behind.

The move comes at a time when competitor Amazon is targeting enterprise users with its own version of remote access software, Amazon Workspaces. Officially launched to the public in March, this service similarly lets company employees access their work computers from any device, including Mac, PC, iPhone, iPad, Android or Kindle Fire HDX tablets. Of course, in Amazon’s case, the goal is to make its tablets appear more business-friendly.

Google’s Remote Desktop, on the other hand, has a more consumer-focused vibe; the company once touted the service as a way to be the family hero, from “adjusting printer settings on your mom’s computer to finding a lost file on your dad’s laptop,” for example.

The official Chrome Remote Desktop Android app is available here on Google Play.

[techcrunch]

Apr 14

SQL Server 2014 Key Features and Enhancements

Author: admin | Category: IT News, SQL


SQL Server 2014 comes with a set of enhancements over its predecessors that bring significant business benefits to applications in OLTP and OLAP environments. This article highlights a few key enhancements built into the product.

Performance Enhancements

These are performance enhancements that can help your workload run faster; a short T-SQL sketch after the list illustrates several of them.

  • In-Memory OLTP:  This new feature allows the database engine to handle in-memory operations in OLTP scenarios (resolving issues in high-concurrency situations). The component is code-named “Hekaton”. Microsoft had already shipped an in-memory component called ‘xVelocity’, catering to OLAP requirements, along with SQL Server 2012.
  • Online indexing at the partition level: This feature allows index rebuilds to be performed online at the partition level. Index statistics can also be managed at the partition level, which is a significant performance improvement for large partitioned tables.
  • Updatable Columnstore Indexes (CSI):  The CSI feature was introduced in SQL Server 2012, with the limitation that a table could not be modified once a CSI was created; to change the data, the index had to be dropped or disabled and then rebuilt. SQL Server 2014 adds the ability to load and delete data in tables with columnstore indexes.
  • Buffer Pool Extension to Solid State Drives (SSDs): Each node can have its own SSD or SSD array for buffering (much as you would for TempDB), improving performance through faster paging. Frequently used data can be cached on SSDs, which is best leveraged for read-heavy OLTP workloads.
  • Resource Management: Resource Governor can now control I/O in addition to the CPU and memory controls provided by previous versions.
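
The following is a minimal, hypothetical T-SQL sketch of several of these features; the table, index, pool, and file names are made up for illustration.

-- Rebuild a single partition of an index online (new in SQL Server 2014).
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    REBUILD PARTITION = 3
    WITH (ONLINE = ON);

-- Create an updatable clustered columnstore index on a fact table.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;

-- Extend the buffer pool onto an SSD volume (path and size are illustrative).
ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\BufferPoolExtension.BPE', SIZE = 64 GB);

-- Cap physical I/O per volume for a resource pool (I/O control is new in 2014).
CREATE RESOURCE POOL ReportingPool
    WITH (MIN_IOPS_PER_VOLUME = 50, MAX_IOPS_PER_VOLUME = 500);
ALTER RESOURCE GOVERNOR RECONFIGURE;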

 

Improved Scalability

SQL Server 2014 has increased the amount of hardware it can use.

  • Ability to scale up to 640 logical processors and 4TB of memory in a physical environment
  • Ability to scale to 64 virtual processors and 1TB of memory when running in a virtual machine (VM).

High-Availability Enhancements

  • AlwaysOn Availability Groups (AG) get more secondary replicas: AlwaysOn now supports up to 8 secondary replicas instead of the 4 allowed in SQL Server 2012. Enterprise Edition is, of course, required.
  • AlwaysOn Availability Group readable secondaries stay online (more reliable). In SQL Server 2012, if the primary dropped offline, the readable replica databases dropped offline with it. In SQL Server 2014, secondaries remain online and readable when the primary isn’t available.
  • Azure-integrated AlwaysOn Availability Groups: Azure VMs can be used as AlwaysOn AG replicas. These replicas can be created asynchronously in the cloud (on the Azure platform), which saves you from paying for expensive offsite datacenter space and machines that sit idle all the time. An illustrative sketch of the availability-group syntax follows this list.
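
A minimal sketch of creating an availability group with two synchronous on-premises replicas and an asynchronous, read-only Azure-hosted replica; the server names, endpoint URLs, and database name are hypothetical.

CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    N'ONPREM1' WITH (
        ENDPOINT_URL = N'TCP://onprem1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'ONPREM2' WITH (
        ENDPOINT_URL = N'TCP://onprem2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'AZUREVM1' WITH (
        ENDPOINT_URL = N'TCP://azurevm1.cloudapp.net:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));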

Backup Enhancements

  • Smart Backup to Azure (Windows Azure integrated backup): Another new backup feature is smart backups, with which SQL Server determines whether a full or incremental backup is needed and backs up to Azure accordingly.
  • The Azure backup feature and the Azure AlwaysOn availability options are fully integrated into SSMS. A minimal backup-to-URL sketch follows this list.
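
As a rough illustration of backing up a database directly to Azure blob storage; the storage account, container, credential name, and key below are hypothetical.

-- Credential holding the storage account name and access key.
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',
    SECRET = '<storage access key>';

-- Back up straight to a blob container using that credential.
BACKUP DATABASE SalesDB
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
    WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION;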

Microsoft® in-memory database engine

Microsoft® implemented an in-memory transactional engine under the project code name “Hekaton”*. Hekaton is expected to dramatically improve the throughput and latency of SQL Server’s on-line transaction processing (OLTP) capabilities. It is designed to meet the requirements of the most demanding OLTP applications, for financial services companies, online gaming and other businesses with extremely demanding TP requirements. It achieves a breakthrough improvement in TP capabilities without requiring a separate data management product or a new programming model. It’s still SQL Server!

Note*: Hekaton comes from the Greek word ἑκατόν, meaning “hundred”. The design goal for the original Hekaton proof-of-concept prototype was to achieve a 100x speedup for certain TP operations.

Key Features:

  • Implements a row-based technology squarely focused on transaction processing (TP) workloads. However, the xVelocity* and Hekaton in-memory approaches are NOT mutually exclusive: combining Hekaton with SQL Server’s existing xVelocity columnstore index and xVelocity analytics engine makes for a powerful pairing.
  • Hekaton (the in-memory TP engine) and the xVelocity columnstore index are built into SQL Server, rather than shipped as a separate data engine, which is a conscious design choice.

Note*: xVelocity is the OLAP-oriented in-memory technology released along with SQL Server 2012.

Technology Implementation:

  • Hekaton works by providing in-memory storage for the most frequently used tables in SQL Server. It identifies the most-accessed tables and stores them in the system’s main memory for faster access.
  • Hekaton compiles T-SQL stored procedures directly into native code for faster execution.
  • Hekaton uses a new concurrency control mechanism, developed by a Microsoft® team and researchers from the University of Wisconsin, that relies on lock-free data structures for better scalability across multiple cores, avoiding locks while preserving ACID transaction integrity.

Similar in-memory features are already available outside the core engine, such as Microsoft’s own PowerPivot and Power View. The biggest difference is that Hekaton is built directly into SQL Server, so there are no extensions, downloads, or interfaces to slow down the very engine that is meant to increase your speed.

A few challenges with implementing an in-memory OLTP database are as follows; a minimal table-and-procedure sketch follows the list.

  • You need to change your data model, which can mean significant departures from a traditional OLTP design. For example, identity fields aren’t supported, so you may have to use a GUID as the primary key.
  • You need to change application code to replace ad hoc SQL queries with stored procedure calls. Hekaton works best with stored procedures, because stored procedures can be compiled into native code.
  • Since processing happens entirely in memory, any sudden growth in the Hekaton tables leaves less room to cache your other tables, and you may run out of memory.
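
A minimal, hypothetical sketch of a memory-optimized table and a natively compiled stored procedure; it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the names are illustrative.

-- Memory-optimized table; note the nonclustered hash primary key (no IDENTITY).
CREATE TABLE dbo.ShoppingCart
(
    CartId  UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId  INT       NOT NULL,
    Created DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Natively compiled stored procedure operating on that table.
CREATE PROCEDURE dbo.AddCart
    @CartId UNIQUEIDENTIFIER, @UserId INT, @Created DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, UserId, Created)
    VALUES (@CartId, @UserId, @Created);
END;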

[sqlservercentral]

Apr 2

How WhatsApp Grew to Nearly 500 Million Users, 11,000 cores, and 70 Million Messages a Second

Author: admin | Category: IT News

When we last visited WhatsApp they’d just been acquired by Facebook for $19 billion. We learned about their early architecture, which centered around a maniacal focus on optimizing Erlang to handle 2 million connections per server, working on All The Phones, and making users happy through simplicity.

Two years later traffic has grown 10x. How did WhatsApp make that jump to the next level of scalability?

Rick Reed tells us in a talk he gave at the Erlang Factory, That’s ‘Billion’ with a ‘B’: Scaling to the next level at WhatsApp (slides), which revealed some eye-popping WhatsApp stats:

What has hundreds of nodes, thousands of cores, hundreds of terabytes of RAM, and hopes to serve the billions of smartphones that will soon be a reality around the globe? The Erlang/FreeBSD-based server infrastructure at WhatsApp. We’ve faced many challenges in meeting the ever-growing demand for our messaging services, but we continue to push the envelope on the size (>8000 cores) and speed (>70M Erlang messages per second) of our serving system.

What are some of the most notable changes from two years ago?

  • Obviously much bigger in every dimension, except the number of engineers. More boxes, more datacenters, more memory, more users, and more scale problems. Handling this level of growth with so few engineers is what Rick is most proud of: 40 million users per engineer. This is part of the win of the cloud. Their engineers work on their software; the network, hardware, and datacenter are handled by someone else.

  • They’ve moved away from trying to support as many connections per box as possible, because of the need to have enough headroom to handle the overall increased load on each box. Their general strategy of keeping down management overhead by getting really big boxes and running efficiently on SMP machines remains the same.

  • Transience is nice. With multimedia, pictures, text, voice, video all being part of their architecture now, not having to store all these assets for the long term simplifies the system greatly. The architecture can revolve around throughput, caching, and partitioning.

  • Erlang is its own world. Listening to the talk it became clear how much of everything you do is in the world view of Erlang, which can be quite disorienting. Though in the end it’s a distributed system and all the issues are the same as in any other distributed system.

  • Mnesia, the Erlang database, seemed to be a big source of problems at their scale. It made me wonder if some other database might be more appropriate and if the need to stay within the Erlang family of solutions can be a bit blinding.

  • Lots of problems related to scale, as you might imagine. Problems with flapping connections, queues getting so long they delay high-priority operations, flapping of timers, code that worked just fine at one traffic level breaking badly at higher traffic levels, high-priority messages not getting serviced under high load, operations blocking other operations in unexpected ways, failures causing resource issues, and so on. These things just happen and have to be worked through no matter what system you are using.

  • I remain stunned and amazed at Rick’s ability to track down and fix problems. Quite impressive.

Rick always gives a good talk. He’s very generous with specific details that obviously derive directly from issues experienced in production. Here’s my gloss on his talk…

Stats

  • 465M monthly users.

  • 19B messages in & 40B out per day

  • 600M pics, 200M voice, 100M videos

  • 147M peak concurrent connections – phones connected to the systems

  • 230K peak logins/sec – phones connecting and disconnecting

  • 342K peak msgs in/sec, 712K out

  • ~10 team members work on Erlang, handling both development and ops.

  • Holidays see the highest usage for multimedia.

    • 146Gb/s out (Christmas Eve), quite a bit of bandwidth going out to phones

    • 360M videos downloaded (Christmas Eve)

    • 2B pics downloaded (46k/s) (New Year’s Eve)

    • 1 pic downloaded 32M times (New Year’s Eve)

Stack

  • Erlang R16B01 (plus their own patches)

  • FreeBSD 9.2

  • Mnesia (database)

  • Yaws

  • SoftLayer is their cloud provider, bare metal machines, fairly isolated within the network, dual datacenter configuration

Hardware

  • ~ 550 servers + standby gear

    • ~150 chat servers (~1M phones each, 150 million peak connections)

    • ~250 mms (multimedia) servers

    • 2x2690v2 Ivy Bridge 10-core (40 threads total with hyperthreading)

    • Database nodes have 512GB of RAM

    • Standard compute nodes have 64GB of RAM

    • SSD primarily for reliability, except when storing video because more storage is required

    • Dual-link GigE x2 (public which is user facing & private which faces the backend systems)

  • > 11,000 cores run the Erlang system

System Overview

  • Erlang love.

    • Great language to support so many users with so few engineers.

    • Great SMP scalability. Can run very large boxes and keep the node count low. Operational complexity scales with number of nodes, not the number of cores.

    • Can update code on the fly.

  • Scalability is like clearing a minefield. They are generally able to detect and clear problems before they explode. Events which test the system are world events, especially soccer, which creates big vertical load spikes. Server failures, usually RAM. Network failures. And bad software pushes.

  • Conventional looking architecture:

    • Phones (clients) connect to chat and MMS (multimedia).

    • Chat connects to transient offline storage. Backend systems hold on to messages while they are in transit between users.

    • Chat connects to databases like Account, Profile, Push, Group, …

  • Messages to phones:

    • Actual text messages

    • Notifications: group subjects, profile photo changes, etc

    • Presence messages: typing, idle, connected/not connected, etc

  • Multimedia database:

    • In-memory Mnesia database using about 2TB of RAM sharded across 16 partitions to store about 18 billion records.

    • Messages and multimedia are only stored while they are being delivered, but while the media is being delivered information about the media is stored in the database.

  • Run at 1 million connections per server instead of the two million connections per server they did two years ago, generally because the servers are a lot busier:

    • With more users they want to run with more head room on each server to soak up peak loads.

    • Users are more active than they were two years ago. They are sending more messages so the servers are doing more.

    • Functionality that used to be outside these servers was moved to run on them so they are doing more.

Decoupling

  • Isolate bottlenecks so they don’t spread through the system

    • Tight coupling causes cascading failures.

    • Backend systems that are deeper in the stack shouldn’t bubble up to the front-end.

    • Everything is partitioned so that if one partition is in trouble the other partitions won’t be impacted.

    • Keep as much throughput going as you can while problems are being addressed.

  • Asynchronicity to minimize impact of latency on throughput

    • Allows keeping throughput as high as possible even when there is latency at various points in the system or when latency is unpredictable.

    • Reduces coupling and allows systems to work as fast as they can.

  • Avoid head-of-line blocking

    • Head-of-line blocking is where processing on the first packet in a queue starves all the items queued behind it.

    • Separate read and write queues. Especially where performing transactions on tables so if there’s any latency on the write side it doesn’t block the read side. Generally the read side is going much faster so any blocking will pile up readers.

    • Separate inter-node queues. If a node or network connecting nodes runs into trouble, it can block work in an application. So when sending to different nodes the messages are given to different procs (lightweight concurrency in Erlang) so only messages destined for a problem node are backed up. This allows messages to healthy nodes to flow freely. The problem is isolated to where the problem is. Patched mnesia to do this well at async_dirty replication time. The app sending the message is decoupled from the sending and does not feel any back pressure if there’s a problem with a node.

    • When working with non-deterministic latency a “queuer” FIFO worker dispatch model is used.

  • Meta-clustering

    • Note, this section is at about 29 minutes into the talk and is covered very briefly, unfortunately.

    • Needed a way to contain the size of a single cluster and also allow it to span long distances.

    • Built wandist, a dist-like transport over gen_tcp, that consists of a mesh of the nodes that need to talk to each other.

    • A transparent routing layer above pg2 creates a single hop routing dispatch system.

    • Example: two main clusters in two datacenters, two multimedia clusters in two different datacenters, and a shared global cluster between the two datacenters. They all have wandist connections between them.

  • Examples:

    • Avoid mnesia transaction coupling by using async_dirty; transactions are mostly not used.

    • Use calls only when a result is needed back from the database; otherwise cast everything to preserve the asynchronous mode of operation. In Erlang, handle_call blocks for a response and messages are queued up, while handle_cast doesn’t block because the result of the operation isn’t of interest.

    • Calls use timeouts, not monitors. Reduces contention on procs on the far end and reduces traffic over the distribution channel.

    • When only need best effort delivery use nosuspend on casts. This isolates a node from downstream problems if either a node has problems or the network has problems, in which case distribution buffers back up on the sending node and procs that try to send start to get suspended by the scheduler which causes a cascading failure where everyone is waiting and no work is getting done.

    • Use large distribution buffers to absorb problems in the network and on downstream nodes.

Parallelize

  • Work distribution:

    • Need to distribute work over 11,000 cores.

    • Start with a single threaded gen_server. Then created a gen_factory to spread the work across multiple workers.

    • At a certain load the dispatch process itself became a bottleneck and not just because of the execute time. There’s a high fan-in with a lot of nodes feeding into the dispatch process for a box, the locks on the process become a bottleneck with the distribution ports coming in and the process itself.

    • So created a gen_industry, a layer above gen_factory, so that there are multiple dispatch procs which allows for the parallelization of all the input coming into the box as well as the dispatch to the workers themselves.

    • Workers are selected by a key for database operations. For the cases where there’s non-deterministic latency, like for IO, a FIFO model is used to distribute work to prevent head-of-line blocking problems.

  • Partition services:

    • Partition between 2 and 32 ways. Most services are partitioned 32 ways.

    • pg2 addressing, which are distributed process groups, is used to address partitions across the cluster.

    • Nodes are run in pairs. One is primary and the other secondary. If one or the other goes down, the remaining node handles both primary and secondary traffic.

    • Generally try to limit the number of procs that access a single ets (built-in term storage) or single mnesia fragment to 8. This keeps the lock contention under control.

  • Mnesia:

    • Because they don’t use transactions to get as much consistency as possible they serialize access to records on a single process on a single node by hashing. Hash to a partition, which maps to a mnesia fragment, and ends up being dispatched into one factory, one worker. So all access to a single record goes to a single erlang process.

    • Each mnesia fragment is only being written to or read from at the application level on one node, which allows replication streams that only go in one direction.

    • When there’s a replication stream going between peers there’s a bottleneck in how fast the fragments can be updated. They patched OTP to have multiple transaction managers running for async_dirty only so record updates happen in parallel which give a lot more replication throughput.

    • Patch to allow the mnesia library directory to be split over multiple libraries, which means it could be written to multiple drives, which increases throughput to the disk. Real issue is when mnesia loads from a peer. Spreading IO over multiple drives, even SSDs, gives a lot more scalability in terms how fast the database is loaded.

    • Shrinking mnesia islands to two nodes per island. An island is an mnesia cluster. Even with 32 partitions there will be 16 islands that support a table. Gives better opportunity to support schema operations under load because there’s only two nodes that have to complete the schema operation. Reduces load time coordination if trying to bring one or both nodes up at the same time.

    • Deal with network partitions in mnesia by alerting quickly. They continue running. And then have a manual reconciliation process to merge them together.

Optimize

  • The offline storage system used to be a big bottleneck under load spikes. Just couldn’t push stuff to the file system fast enough.

    • Most messages are read quickly by users, like 50% within 60 seconds.

    • Added a write-back cache so messages could be delivered before they had to be written to the filesystem. 98% cache hit rate.

    • If the IO system is backed up because of load, the cache gives extra buffering to deliver messages at full rate while the IO system works to catch up.

  • Fixed head-of-line blocking in async file IO by patching BEAM (the Erlang VM) to round-robin file port requests across all async worker threads which smoothed writes in the cases where there was a large mailbox or a slow disk.

  • Keep large mailboxes out of the cache. Some people are in a large number of groups and get thousands of messages per hour. They polluted the cache and slowed things down. Evict them from the cache. Note, dealing with disproportionately large users is a problem for every system, including Twitter.

  • Slow access to mnesia table with lots of fragments

    • The account table has 512 fragments which are partitioned into the islands, which means there’s a sparse mapping of users to these 512 fragments. Most of the fragments will be empty and idle.

    • Doubling the number of hosts caused the throughput to go down. It turned out record access was really slow because hash chain sizes were over 2K when the target is 7.

    • What was happening was that the hashing scheme caused a large number of empty buckets to be created and a few that were extremely long. A two-line change improved performance by 4 to 1.

Patching

  • Ran into contention on the timer wheel. With a few million connections into a single host, and each of those setting and resetting a timer whenever something happens with a particular phone, the result is hundreds of thousands of timer sets and resets per second. With one timer wheel and one lock it was a significant source of contention. The solution was to create multiple timer wheels to eliminate the contention.

  • mnesia_tm is a big select loop and under load when trying to load tables the backlog could get to a point of no return because of the selective receive. Patch to pull stuff out of the incoming transactions stream and save it to process later.

  • Add multiple mnesia_tm async_dirty senders.

  • Add mark/set for prim_file commands.

  • Some clusters span a continent, so mnesia should load from a nearby node rather than across the country.

  • Add round-robin scheduling for async file IO.

  • Seed ets hash to break coincidence w/ phash2.

  • Optimize ets main/name tables for scale.

  • Don’t queue mnesia dump if already dumping. Can’t complete a schema operation with dumps pending so if a lot of dumps are queued it wasn’t possible to do schema ops.

The 2/22 Outage

  • Even with all this work, stuff happens. And it happened at the worst time: a 210-minute outage occurred right after the Facebook acquisition.

  • Did not happen because of load. It began with a back-end router problem.

  • The router dropped a VLAN which caused a massive node disconnect/reconnect throughout the cluster. When everything reconnected it was in an unstable state they had never seen before.

  • Finally decided they had to stop everything and bring it back up, which they hadn’t done in years. It took a while to bring everything down and bring it back up.

  • In the process they found an overly-coupled subsystem. With disconnects and reconnects pg2 can get in a state where it’s doing n^3 messaging. They saw pg2 message queues going from zero to 4 million within seconds. Rolling out a patch.

Release

  • Can’t simulate traffic at this scale, especially with huge spikes, like New Year’s Eve at midnight. If they are trying something out that is extremely disruptive it is rolled out very slowly. It will only take a small piece of the traffic. Then they quickly iterate until it works well, then roll out to the rest of the cluster.

  • Rollout is a rolling upgrade. Everything is redundant. If they want to do a BEAM upgrade it is installed and then restarts are gradually executed across the cluster to pick up the new changes. Sometimes, if it’s just a hot patch, it can be rolled out without a complete restart. This is rare; usually they upgrade the whole thing.

Remaining Challenges

  • Databases get reloaded on a fairly regular basis due to upgrades. The problem is that with so much data it takes a long time to load, and loads fail for various reasons at larger scale.

  • Real-time cluster status & control at scale. The old shell window approach won’t work anymore.

  • Power-of-2 partitioning. At 32 partitions now. The next step is 64, which will work, but 128 partitions will be impractical. Not much discussion on this point.

[highscalability]

Mar 26

Hacking ATM Machines with Just a Text Message

Author: admin | Category: IT News, Security

As we reported earlier, Microsoft will stop supporting the Windows XP operating system after 8th April, yet apparently 95% of the world’s 3 million ATMs run on it. Microsoft’s decision to withdraw support for Windows XP poses a critical security threat to economic infrastructure worldwide.

MORE REASONS TO UPGRADE
Security researchers at antivirus firm Symantec claim that hackers can exploit a weakness in Windows XP-based ATMs that allows them to withdraw cash simply by sending an SMS to compromised machines.
“What was interesting about this variant of Ploutus was that it allowed cybercriminals to simply send an SMS to the compromised ATM, then walk up and collect the dispensed cash. It may seem incredible, but this technique is being used in a number of places across the world at this time,” researchers said.

HARDWIRED Malware for ATMs

According to the researchers, in 2013 they detected a malware named Backdoor.Ploutus installed on ATMs in Mexico, which is designed to rob a certain type of standalone ATM with just text messages.
To install the malware onto an ATM, the hacker must connect the ATM to a mobile phone via USB tethering and then initiate a shared Internet connection, which can then be used to send specific SMS commands to the phone attached or hardwired inside the ATM.
Since the phone is connected to the ATM through the USB port, the phone also draws power from the connection, which charges the phone battery. As a result, the phone will remain powered up indefinitely.

HOW-TO HACK ATMs

  • Connect a mobile phone to the machine with a USB cable and install Ploutus Malware.
  • The attacker sends two SMS messages to the mobile phone inside the ATM.
    • SMS 1 contains a valid activation ID to activate the malware
    • SMS 2 contains a valid dispense command to get the money out
  • The mobile phone attached inside the ATM detects valid incoming SMS messages and forwards them to the ATM as a TCP or UDP packet.
  • A network packet monitor (NPM) module coded into the malware receives the TCP/UDP packet and, if it contains a valid command, executes Ploutus.
  • The cash withdrawal amount is pre-configured inside the malware.
  • Finally, the hacker can collect cash from the hacked ATM.
Researchers have detected a few more advanced variants of this malware; some attempt to steal customer card and PIN data, while others attempt man-in-the-middle attacks.
This malware is now spreading to other countries, so you are recommended to pay extra attention and remain cautious while using an ATM.
[thehackernews]
Mar 25

Global Impact Competition 2014: this technology innovation idea competition offers a free study program in the United States worth US$30,000

Author: admin | Category: IT News

If you have a brilliant idea or concept in technology that would benefit society, don’t keep it to yourself. Now is a good time to put it into practice through the Global Impact Competition 2014. This technology innovation idea competition offers, as its prize, a free study program in the United States worth US$30,000.

Initiated by Singularity University (SU), the competition is part of SU’s annual Graduate Studies Program, which invites talented young people from around the world to join SU’s study program held at NASA Research Park in Silicon Valley, United States. Indonesia has been given one exclusive seat in this program, which anyone can win through the Indonesia GIC 2014 selection process.

The selection process takes the form of a competition for technology innovation ideas that benefit the wider community. For reference, Indonesia GIC was also held in 2012 and was won by Fransiska Hadiwidjana, who went on to found the startup Augmented Medical Intelligence, Inc. In a testimonial included in the press release, Fransiska said that through this competition and her participation in SU’s Graduate Studies Program, her startup gained international recognition and even made it into the top companies of Startup Chile Round 6.

This year, Indonesia GIC 2014 carries the theme of climate change: contestants are asked to submit innovative ideas in the fields of climate, environment and energy that would significantly reduce greenhouse gas emissions, deforestation or fossil fuel use, or that facilitate adaptation to climate change in various sectors. Naturally, every submitted idea must be implemented through technology.

As mentioned at the start, the winner of this competition will receive a US$30,000 scholarship (full tuition fee) to attend the SU Graduate Studies Program 2014. If it runs on schedule, the program will take place over this summer, from June to August.

If you are interested in taking part, you can visit the official Indonesia GIC 2014 page. Registration closes on 14 April 2014. There is still enough time to craft your best technology idea to help address the world’s climate change problem.

[dailysocial]

Mar 20

Android: Fast Communication with .NET Using Protocol Buffers

Author: admin | Category: .NET, Android, IT News, Programming, Tips & Tricks

Introduction

One of the challenges in interprocess communication across platforms (e.g. between Android and .NET) is how to encode (serialize) messages so that they can be understood by both sides. The default binary serializers provided by each platform are not compatible, so the common solution is to encode messages in a text format such as XML or JSON. This is perfectly fine in many cases, but for applications with high performance expectations a text format may not be the optimal solution.

The example below demonstrates how to use Protocol Buffers binary serialization in the inter-process communication between Android and .NET applications.

You Need to Download

In order to build the example source code you will need to add references to related libraries into the project. To get these libraries you can download:

  • Eneter.ProtoBuf.Serializer – protocol buffer serializer for Eneter, it also contains compiled protocol buffer libraries and utility applications for ‘proto’ files.
  • Eneter.Messaging.Framework – communication framework that can be used for free for non-commercial use.

Protocol Buffers libraries are open source projects that can be found at:

  • protobuf – Google implementation of Protocol Buffers for Java, C++ and Python.
  • protobuf-net – Protocol Buffers implementation from Marc Gravell for .NET platforms.
  • Eneter.ProtoBuf.Serializer – Open source project to integrate Protocol Buffers and Eneter Messaging Framework.

Add Following References into your Project

Into .NET project:

  • protobuf-net.dll – protocol buffers serializer for .NET, Windows Phone, Silverlight and Compact Framework developed by Marc Gravell.
  • Eneter.ProtoBuf.Serializer.dll – implements serializer for Eneter Messaging Framework using protobuf-net.dll.
  • Eneter.Messaging.Framework.dll – lightweight cross-platform framework for inter-process communication.

Into Android project:

  • protobuf.jar – protocol buffers serializer for Java and Android developed by Google.
  • eneter-protobuf-serializer.jar – implements serializer for Eneter Messaging Framework using protobuf.jar from Google.
  • eneter-messaging.jar – lightweight cross-platform framework for inter-process communication.

Important: please follow this procedure (for Eclipse) to add libraries into the Android project:
(To add a library into the project you need to import it instead of adding it via project properties.
Also ensure the Java compliance level is set to 1.6: Properties -> Java Compiler -> JDK Compliance -> 1.6.)

  1. Create a new folder ‘libs’ in your project. (use exactly name libs)
  2. Right click on ‘libs’ and choose ‘Import…’ -> ‘General/File System’ -> ‘Next’.
  3. Then click the ‘Browse’ button for ‘From directory’ and navigate to the directory with the libraries you want to add.
  4. Select check boxes for libraries you want to add.
  5. Press ‘Finish’

Protocol Buffers

Protocol Buffers is a binary serialization format originally developed by Google to share data among applications written in different languages like Java, C++ and Python. It became open source and has been ported to other languages and platforms too.

The biggest advantage of Protocol Buffers is its performance and availability on multiple platforms, which makes it an alternative worth considering when designing communication between applications.
If you are interested, a simple performance measurement is available at https://code.google.com/p/eneter-protobuf-serializer/wiki/PerformanceMeasurements.

Working With Protocol Buffers

The following procedure is optimized for defining messages for cross-platform communication:
(If you want to use Protocol Buffers only in .NET you do not have to declare messages via the ‘proto’ file but you can declare them directly in the source code by attributing classes – same way as using DataContractSerializer.)

  1. Declare messages in the ‘proto’ file.
  2. Compile the ‘proto’ file into the source code (C# and Java). It transforms declared messages to classes containing specified fields and the serialization functionality.
  3. Include generated source files into C# and Java projects.
  4. Initialize Eneter communication components to use EneterProtoBufSerializer.

[Image: UsingProtoBuf.png]

Example Code

The example below is exactly the same as in my previous article Android: How to communicate with .NET application via TCP. The only difference is that the code in this article uses EneterProtoBufSerializer instead of XmlStringSerializer.

Please refer to Android: How to communicate with .NET application via TCP if you need details about how to use TCP on Android and how to setup the IP address in the emulator.

[Image: CommunicationBetweenAndroidandNETProtoBuf.png]

proto File

The ‘proto’ file represents a contract describing messages that shall be used for the interaction. Messages are declared in the platform neutral protocol buffer language – for the syntax details you can refer to https://developers.google.com/protocol-buffers/docs/proto.

Messages in our example are declared in the file MessageDeclarations.proto:

// Request Message
message MyRequest
{
    required string Text = 1;
}

// Response Message
message MyResponse
{
    required int32 Length = 1;
}

The ‘proto’ file is then compiled to C# and Java source code. Declared messages are transformed to classes containing declared fields and serialization functionality.

The following commands were used in our example to compile the ‘proto’ file:

protogen.exe -i:MessageDeclarations.proto -o:MessageDeclarations.cs
protoc.exe -I=./ --java_out=./ ./MessageDeclarations.proto

Android Client Application

The Android client is a very simple application that lets the user enter a text message and send a request to the service, which returns the length of the text.
When the response message is received it must be marshaled to the UI thread to display the result.

The client uses EneterProtoBufSerializer. It instantiates the serializer in the openConnection() method and passes its reference to the DuplexTypedMessagesFactory, ensuring that the message sender will use Protocol Buffers.

The whole implementation is very simple:

package net.client;

import message.declarations.MessageDeclarations.*;
import eneter.messaging.dataprocessing.serializing.ISerializer;
import eneter.messaging.diagnostic.EneterTrace;
import eneter.messaging.endpoints.typedmessages.*;
import eneter.messaging.messagingsystems.messagingsystembase.*;
import eneter.messaging.messagingsystems.tcpmessagingsystem.TcpMessagingSystemFactory;
import eneter.net.system.EventHandler;
import eneter.protobuf.ProtoBufSerializer;
import android.app.Activity;
import android.os.Bundle;
import android.os.Handler;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.*;

public class AndroidNetCommunicationClientActivity extends Activity
{
    // UI controls
    private Handler myRefresh = new Handler();
    private EditText myMessageTextEditText;
    private EditText myResponseEditText;
    private Button mySendRequestBtn;

    // Sender sending MyRequest and as a response receiving MyResponse.
    private IDuplexTypedMessageSender<MyResponse, MyRequest> mySender;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // Get UI widgets.
        myMessageTextEditText = (EditText) findViewById(R.id.messageTextEditText);
        myResponseEditText = (EditText) findViewById(R.id.messageLengthEditText);
        mySendRequestBtn = (Button) findViewById(R.id.sendRequestBtn);

        // Subscribe to handle the button click.
        mySendRequestBtn.setOnClickListener(myOnSendRequestClickHandler);

        // Open the connection in another thread.
        // Note: From Android 3.1 (Honeycomb) or higher
        //       it is not possible to open TCP connection
        //       from the main thread.
        Thread anOpenConnectionThread = new Thread(new Runnable()
            {
                @Override
                public void run()
                {
                    try
                    {
                        openConnection();
                    }
                    catch (Exception err)
                    {
                        EneterTrace.error("Open connection failed.", err);
                    }
                }
            });
        anOpenConnectionThread.start();
    }

    @Override
    public void onDestroy()
    {
        // Stop listening to response messages.
        mySender.detachDuplexOutputChannel();

        super.onDestroy();
    } 

    private void openConnection() throws Exception
    {
        // Instantiate Protocol Buffer based serializer.
        ISerializer aSerializer = new ProtoBufSerializer();

        // Create sender sending MyRequest and as a response receiving MyResponse
        // The sender will use Protocol Buffers to serialize/deserialize messages. 
        IDuplexTypedMessagesFactory aSenderFactory = new DuplexTypedMessagesFactory(aSerializer);
        mySender = aSenderFactory.createDuplexTypedMessageSender(MyResponse.class, MyRequest.class);

        // Subscribe to receive response messages.
        mySender.responseReceived().subscribe(myOnResponseHandler);

        // Create TCP messaging for the communication.
        // Note: 10.0.2.2 is a special alias to the loopback (127.0.0.1)
        //       on the development machine.
        IMessagingSystemFactory aMessaging = new TcpMessagingSystemFactory();

        IDuplexOutputChannel anOutputChannel
            = aMessaging.createDuplexOutputChannel("tcp://10.0.2.2:8060/");
            //= aMessaging.createDuplexOutputChannel("tcp://192.168.1.102:8060/");

        // Attach the output channel to the sender and be able to send
        // messages and receive responses.
        mySender.attachDuplexOutputChannel(anOutputChannel);
    }

    private void onSendRequest(View v)
    {
        // Create the request message using ProtoBuf builder pattern.
        final MyRequest aRequestMsg = MyRequest.newBuilder()
                .setText(myMessageTextEditText.getText().toString())
                .build();

        // Send the request message.
        try
        {
            mySender.sendRequestMessage(aRequestMsg);
        }
        catch (Exception err)
        {
            EneterTrace.error("Sending the message failed.", err);
        }

    }

    private void onResponseReceived(Object sender,
                                    final TypedResponseReceivedEventArgs<MyResponse> e)
    {
        // Display the result - returned number of characters.
        // Note: Marshal displaying to the correct UI thread.
        myRefresh.post(new Runnable()
            {
                @Override
                public void run()
                {
                    myResponseEditText.setText(Integer.toString(e.getResponseMessage().getLength()));
                }
            });
    }

    private EventHandler<TypedResponseReceivedEventArgs<MyResponse>> myOnResponseHandler
            = new EventHandler<TypedResponseReceivedEventArgs<MyResponse>>()
    {
        @Override
        public void onEvent(Object sender,
                            TypedResponseReceivedEventArgs<MyResponse> e)
        {
            onResponseReceived(sender, e);
        }
    };

    private OnClickListener myOnSendRequestClickHandler = new OnClickListener()
    {
        @Override
        public void onClick(View v)
        {
            onSendRequest(v);
        }
    };
}

.NET Service Application

The .NET service is a simple console application listening to TCP and receiving requests to calculate the length of a given text.

The service uses EneterProtoBufSerializer. It instantiates the serializer and passes its reference to the DuplexTypedMessagesFactory, ensuring that the message receiver will use Protocol Buffers to deserialize incoming messages and serialize response messages.

The whole implementation is very simple:

using System;
using Eneter.Messaging.DataProcessing.Serializing;
using Eneter.Messaging.EndPoints.TypedMessages;
using Eneter.Messaging.MessagingSystems.MessagingSystemBase;
using Eneter.Messaging.MessagingSystems.TcpMessagingSystem;
using Eneter.ProtoBuf;
using message.declarations;

namespace ServiceExample
{
    class Program
    {
        private static IDuplexTypedMessageReceiver<MyResponse, MyRequest> myReceiver;

        static void Main(string[] args)
        {
            // Instantiate Protocol Buffer based serializer.
            ISerializer aSerializer = new ProtoBufSerializer();

            // Create message receiver receiving 'MyRequest' and sending 'MyResponse'.
            // The receiver will use Protocol Buffers to serialize/deserialize messages. 
            IDuplexTypedMessagesFactory aReceiverFactory =
                new DuplexTypedMessagesFactory(aSerializer);
            myReceiver = aReceiverFactory.CreateDuplexTypedMessageReceiver<MyResponse, MyRequest>();

            // Subscribe to handle messages.
            myReceiver.MessageReceived += OnMessageReceived;

            // Create TCP messaging.
            IMessagingSystemFactory aMessaging = new TcpMessagingSystemFactory();

            IDuplexInputChannel anInputChannel
                = aMessaging.CreateDuplexInputChannel("tcp://127.0.0.1:8060/");

            // Attach the input channel and start to listen to messages.
            myReceiver.AttachDuplexInputChannel(anInputChannel);

            Console.WriteLine("The service is running. To stop press enter.");
            Console.ReadLine();

            // Detach the input channel and stop listening.
            // It releases the thread listening to messages.
            myReceiver.DetachDuplexInputChannel();
        }

        // It is called when a message is received.
        private static void OnMessageReceived(object sender,
                                              TypedRequestReceivedEventArgs<MyRequest> e)
        {
            Console.WriteLine("Received: " + e.RequestMessage.Text);

            // Create the response message.
            MyResponse aResponse = new MyResponse();
            aResponse.Length = e.RequestMessage.Text.Length;

            // Send the response message back to the client.
            myReceiver.SendResponseMessage(e.ResponseReceiverId, aResponse);
        }
    }
}

[codeproject]
Mar 18

SEO Strategies for Designers

Author: admin | Category: IT News, Tips & Tricks

Should designers be expected to carry out SEO?

For years now we’ve heard the phrase “SEO is dead” being bandied around the net, most recently in an ill-advised article in UK newspaper The Guardian. At the time, the article caused something of a rumpus around the web from various SEO professionals who quickly printed some responses.

However, the idea that SEO is dead has been around for a long time, almost as long as the discipline itself. It’s fair to say that these days, SEO encompasses a range of techniques, rather than just optimizing a site, so it’s probably better described as digital marketing overall.

But what, if anything, does SEO mean to the web designer? Is it their job to ensure that the site has the correct meta information and on-page keywords? Or is it just the technicalities like site structure that they should concentrate on?

Offering more pays better

There’s little doubt that SEO is a widely used technique, in terms of design and marketing and so it stands to reason that the designer who can offer optimization will win better paid projects.

With this in mind, I thought I’d create a series of articles that look at the different aspects of SEO and what designers should really be carrying out as a minimum. For today, we’ll concentrate on the basics, such as:

  • Choosing a URL & structure
  • Keywords
  • Meta tags
  • Headers
  • Images

Content will of course be mostly up to the customer, but do bear in mind that many just can’t write. If you want to be able to offer a truly all-round service, then why not consider partnering with a freelancer or content agency so that you can?

URLs, how to choose and URL structure

Before looking at URLs in a little more depth, it’s worth giving some thought to keywords. This is research the client should already have carried out if they have a good marketing plan, as keywords can be used across so many different platforms these days.

Keyword research takes time but if you have it, then use it and offer it as another service, you can always take on a virtual assistant with SEO skills if you get too busy. Whatever the case, when you begin the design process, ideally you should have the keywords that are going to be used at your disposal.

This brings us to URLs. Is this something that you provide for your clients along with hosting and domain registration? Perhaps you should. URLs should really be as short as possible, so that they’re memorable, and should make use of keywords where possible.

When it comes to structure, it’s safe to say that there are good URLs and bad ones, if you’re giving thought to SEO.

For example:

  • GOOD Format – www.website.com/other-page
  • POOR Format – www.website.com/44/otherpage/44735413

The ‘shorter the better’ rule stands here too and the URL should just describe the page/use the title of the target.

In the first instance, URLs should be used alongside market research in order to find the best, most searched for terms, which are relevant to the site’s industry and audience. These can then be used for creating page URLs depending on how the audience searches.

For example, do they shop by:

  • Brand
  • Product type
  • Product name

It’s much better to create a URL structure based on words, rather than numbers:

www.yourdomain.com/product.php?/product=2345

OR

www.yourdomain.com/saucepans

It’s easy enough to see which is preferable. This isn’t always possible with ecommerce sites though and will depend on databases and how they’ve been created. However, for information and content pages, they are a must for search. Page and folder file names are much more user friendly. Using hyphens is also good practice as it allows the URL to be quickly scanned by a user, showing them that their search is on the right track before they even arrive at the site.

Keywords and phrases can also be used here, for maximum search capabilities. If using a CMS, such as WordPress, then URLs can be edited with ease, as below.

[Image: URL structure]

Don’t forget to create 301 redirects if you’re carrying this out for an existing site in order to clean it up and make it more search engine (and usability) friendly.

Meta tags

This is so basic, it’s almost not worth mentioning, but I will anyway, just so we cover all bases. I’m not going to insult you by explaining what meta information is to you, as I can’t imagine any designer doesn’t already know.

Meta keywords have little in the way of any uses these days, thanks to the black hat practice of stuffing as many as possible in. However, it might not hurt to include a key phrase here, just to be sure.

For reference, meta information should be as below.

[Image: meta information]

Remember that descriptions should be highly relevant and not overuse keywords, while titles should use keywords as close to the beginning as possible.

Keyword density

When it comes to keyword density, both in meta tags and on-page, I work by the premise of not really worrying too much about it. Unless I’m specifically asked by a client to keep to a certain density, I won’t, and even then I’d be more likely to tell them to ignore density.

A key phrase, based on keywords, in the meta information and on page, accompanied by a closely related phrase or two in the body of the text should be ample, depending on word count.

Google is very up for penalizing those that abuse keywords and it’s so easy to get a penalty that I feel erring on the side of caution to be the best approach. I’m also a writer and actively hate being made to force words in where they would be better occurring naturally. Write for people more than search engines should really be the premise of any design.

Headers

Again, this is something that every designer is more than familiar with, especially whilst everyone remains in love with typography. However, it’s worth pointing out that keywords that appear in H1, H2 tags and so on will help your SEO efforts.

If you can once again use keywords/phrases in these, it certainly won’t do any harm either.

Images and site speed

So we all know that images are one of the major obstacles to a speedy site and so these should be optimized so that they are as small as possible. This can be done using a variety of methods and these days, HTML5 and CSS3 can ensure that images don’t have to be the heaviest part of the site.

For responsive sites it’s also vital that performance is looked at, as unless optimized for performance, they can be very slow to load. However, a recent Moz study found that Google’s algorithms work on time to first byte (TTFB), rather than document loading and rendering.

The way that Google measures site speed, and the fact that it’s just one tiny element among more than 200 ranking factors, means that you shouldn’t be too concerned when it comes to SEO and ranking, but you certainly should be for users.

Usability is key when it comes to making conversions and I’m sure that you would rather build a site that increases your client’s sales than not. So it’s important to give users first consideration over what Google might think, especially since it’s not really making any difference.

All of this is, hopefully, something that many of you are doing anyway, in the interests of good practice, with a few handy tips thrown in. A good site structure and hierarchy is essential to SEO, as is the content of the site and usability.

SEO tends to attract less-than-scrupulous people looking to make a quick buck. So, as an established designer, if you think you have the time and skills, to me it makes sense to offer SEO, as you’re already trusted.

The question then becomes, are you happy freelancing, or could you see yourself heading up an agency that offers a complete design and optimization service? Not everyone will want the latter, but there’s little doubt that when it comes to digital marketing as a whole, opportunity doesn’t just exist, it’s actively knocking.

The role of social media

There’s little doubt that social media is becoming increasingly important to SEO, especially with regard to G+ and Google’s rumored plans for Authorship and Author Rank are likely to cement that.

But what role, if any, does the designer play when it comes to integrating social with the company website? Well that depends on both the designer and the client, but most sites will now have social media links included as a minimum.

That means that for the designer, the opportunity is there to come up with some great, eye-catching, unique designs when it comes to the images for the links. Add to this the need for branding to be identical cross-platform these days and the opportunity for designers is even more apparent.

Why integrate social?

Social signals, such as the amount of followers a site gains and the content that is shared from a site are important. As you know, in the dark days of post Penguin and Panda, content is even more important than it’s ever been and in order to measure engagement, it’s necessary to measure shares as well as traffic to the site.

Ideally, the client should have a sound marketing plan in place for you to work from. They will know which social platforms they are going to use, if they want to include social logins, and the relevance of placing icons on their site.

This means that the savvy designer can maximize on this and provide not only awesome icons on the site, but also great designs to grace the client profile on each social media site. Facebook gateway pages, for example, are a great marketing tool for encouraging people to give contact details and an opportunity for you to create eye-catching, tempting designs that can draw the user in.

User experience is everything

Well-integrated social media channels are essential to the modern web experience, and designs should complement this. It can be done in a variety of ways, and you will have noticed that many clients have moved away from a static design, replacing it with variable content that is usually powered by social media.

According to David Carillo, manager at Earned Media: “Implementing Facebook Open Graph and Twitter cards on a Website is the best way to control the presentation of your website (sic) on social networks. And it’s a lot easier to implement from the beginning than to have to go back once the site is already built out.”
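As a rough illustration of what that looks like in practice, the sketch below renders the Open Graph and Twitter Card meta tags for a page so they can be dropped into the <head> of a template. The helper name and the PageMeta shape are mine; the property names (og:title, twitter:card and so on) are the ones the networks actually read, and a real template would also escape the values.

```typescript
// Minimal sketch: build Open Graph and Twitter Card meta tags for a page.
// Note: values are not HTML-escaped here; a real template should do that.
interface PageMeta {
  title: string;
  description: string;
  url: string;
  image: string;
}

function socialMetaTags(page: PageMeta): string {
  return [
    `<meta property="og:title" content="${page.title}">`,
    `<meta property="og:description" content="${page.description}">`,
    `<meta property="og:url" content="${page.url}">`,
    `<meta property="og:image" content="${page.image}">`,
    `<meta name="twitter:card" content="summary_large_image">`,
    `<meta name="twitter:title" content="${page.title}">`,
    `<meta name="twitter:description" content="${page.description}">`,
    `<meta name="twitter:image" content="${page.image}">`,
  ].join("\n");
}

// Example: drop the result into the <head> of the page template.
console.log(socialMetaTags({
  title: "Spring fishing offers",
  description: "New season tackle and deals",
  url: "http://example.com/offers",
  image: "http://example.com/img/offers.jpg",
}));
```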

Not only this, but the ability to embed content such as YouTube videos and SlideShare presentations means that you have a Google-friendly website in terms of multimedia content too. SlideShare presentations are becoming increasingly popular as a content marketing tool, with many preferring it over white papers now, as it’s a simple but effective way to present content in a form that’s easy to digest.

Tweets, Facebook reviews, LinkedIn recommendations, all of these can also be pulled from social to further boost trust between the client and their users. All of this means that you’re creating a site that is making the very most of social media in terms of usability, SEO, consumer trust and engagement.

Design opportunity and social

There’s nothing to say that you have to be restricted to using standard social plugins or icons for each social media site. These can be incorporated into the design to match the branding of the company.

For example, the design below uses creativity to make branded social icon links, whilst keeping them entirely recognizable as relating to each social network.

[Image: a site with social icon links integrated into the design]

As you can see, it’s immediately apparent which network matches each button, but the designer has been clever and linked the icons to the style of the website. If we now go and check out the social media sites that are linked, you can see that the fishing company has done well with Twitter and carried on the theme, but failed somewhat on Facebook by just using a logo.

[Image: the company’s Twitter profile]

[Image: the company’s Facebook page]

For most designers it’s a simple matter to design Twitter headers and Facebook cover images in Photoshop and the impact it has on the overall brand is well worth it.

Social plugins and logins

Social logins are a great way to include a call to action and create a community without asking too much of the user. These are not just an ideal way of making the site more social by allowing comments and suchlike, but they provide valuable and accurate data that can be used for marketing.

This can then be used for personalizing the user experience for content, product recommendations and more. This is invaluable for a company looking to streamline their lead generation but it can be a little problematic from the perspective of the developer.

This is because implementing each social API means using different protocols, such as OpenID and OAuth. Many of the networks, if not all, will have their own JavaScript library to support the API.
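To give a feel for what’s involved, here’s a rough, generic sketch of the redirect leg of an OAuth 2.0 social login. The endpoint, client ID and redirect URI are placeholders: each network documents its own values, scopes and quirks, which is exactly why those libraries exist.

```typescript
// Minimal, generic OAuth 2.0 sketch: send the visitor to the network's
// authorization endpoint, then read back the code it returns.
// The endpoint, client ID and redirect URI below are placeholders.
const AUTH_ENDPOINT = "https://social-network.example/oauth/authorize";
const CLIENT_ID = "YOUR_APP_ID";
const REDIRECT_URI = "https://yoursite.example/login/callback";

function startSocialLogin(): void {
  const state = Math.random().toString(36).slice(2); // anti-CSRF token
  sessionStorage.setItem("oauth_state", state);

  const params = new URLSearchParams({
    response_type: "code",
    client_id: CLIENT_ID,
    redirect_uri: REDIRECT_URI,
    scope: "email",
    state: state,
  });
  window.location.href = AUTH_ENDPOINT + "?" + params.toString();
}

// On the callback page: check the state, then hand the code to your server,
// which exchanges it for an access token over HTTPS.
function readAuthCode(): string | null {
  const params = new URLSearchParams(window.location.search);
  if (params.get("state") !== sessionStorage.getItem("oauth_state")) {
    return null; // state mismatch, possible CSRF; ignore the response
  }
  return params.get("code");
}
```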

Of course, there are plenty of companies out there who provide social login connectivity as a service, but if you want to code your own there’s a good, detailed article on the MSDN site by Andrew Dodson.

For reference, Facebook remains the preferred network for social login across pretty much all industries, so you could begin with that one first. Bear in mind that it’s thought G+ will catch up by 2016, so you could also include a G+ login.

[Chart: social login trends across the web, Q2 2013]
(Source: http://janrain.com/blog/social-login-trends-across-the-web-for-q2-2013)

Further advantages to providing social login options include doing away with sign-up forms: social login performs the same function, which is simply to collect user information and, to name just one use, build a mailing list. Add to that the ability of mobile users to log in with a single click, and you’re onto a winner.

With regard to plugins, whilst they can slow a site slightly, they are worth having for sharing purposes, as search engines treat shares as a signal that content is useful.
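One way to keep that slowdown to a minimum is to load the widget scripts asynchronously once your own content has rendered. Here’s a minimal sketch; the URL is a placeholder for whichever network’s widget script you actually use.

```typescript
// Minimal sketch: inject a share-widget script asynchronously after the page
// has loaded, so it never blocks your own content. The URL is a placeholder.
function loadShareWidget(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.body.appendChild(script);
}

window.addEventListener("load", () => {
  loadShareWidget("https://social-network.example/widgets.js");
});
```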

Get your social skills perfected

In order to give clients the best possible service, every designer should be prepared to integrate social and know the reasons for doing so. The benefits are numerous to the client and include excellent branding opportunities, increased SEO and lead generation, an enhanced user experience and more.

With mobile becoming an increasingly common way to browse the net, social can play a vital part too. Many sites are discovered through social media, so creating eye-catching designs for social media profiles is another way to increase traffic.

Social also gives a site the ability to host multimedia content and allows this to be shared in a two-way manner between the site and the social media presence.

At the end of the day, there’s no getting away from the social media revolution, so the best approach is to maximize the benefits it can give to your clients and in turn, your own business.

[sitepoint]

Mar
14

10 Programming Languages You Should Learn in 2014

Author admin    Category IT News, Programming     Tags

The tech sector is booming. If you’ve used a smartphone or logged on to a computer at least once in the last few years, you’ve probably noticed this.

As a result, coding skills are in high demand, with programming jobs paying significantly more than the average position. Even beyond the tech world, an understanding of at least one programming language makes an impressive addition to any resumé.

The in-vogue languages vary by employment sector. Financial and enterprise systems need to perform complicated functions and remain highly organized, requiring languages like Java and C#. Media- and design-related webpages and software will require dynamic, versatile and functional languages with minimal code, such as Ruby, PHP, JavaScript and Objective-C.

With some help from Lynda.com, we’ve compiled a list of 10 of the most sought-after programming languages to get you up to speed.

1. Java

java2

What it is: Java is a class-based, object-oriented programming language developed by Sun Microsystems in the 1990s. It’s one of the most in-demand programming languages, a standard for enterprise software, web-based content, games and mobile apps, as well as the Android operating system. Java is designed to work across multiple software platforms, meaning a program written on Mac OS X, for example, could also run on Windows.

Where to learn it: Udemy, Lynda.com, Oracle.com, LearnJavaOnline.org.

2. C Language

c2

What it is: A general-purpose, imperative programming language developed in the early ’70s, C is one of the oldest and most widely used languages, providing the building blocks for other popular languages, such as C#, Java, JavaScript and Python. C is mostly used for implementing operating systems and embedded applications.

Because it provides the foundation for many other languages, it is advisable to learn C (and C++) before moving on to others.

Where to learn it: Learn-C, Introduction To Programming, Lynda.com, CProgramming.com, Learn C The Hard Way.

3. C++

cplusplus

What it is: C++ is an intermediate-level language with object-oriented programming features, originally designed to enhance the C language. C++ powers major software like Firefox, Winamp and Adobe programs. It’s used to develop systems software, application software, high-performance server and client applications and video games.

Where to learn it: Udemy, Lynda.com, CPlusPlus.com, LearnCpp.com, CProgramming.com.

4. C#

csharp

What it is: Pronounced “C-sharp,” C# is a multi-paradigm language developed by Microsoft as part of its .NET initiative. Combining principles from C and C++, C# is a general-purpose language used to develop software for Microsoft and Windows platforms.

Where to learn it: Udemy, Lynda.com, Microsoft Virtual Academy, TutorialsPoint.com.

5. Objective-C

objectivec

What it is: Objective-C is a general-purpose, object-oriented programming language used by the Apple operating system. It powers Apple’s OS X and iOS, as well as its APIs, and can be used to create iPhone apps, which has generated a huge demand for this once-outmoded programming language.

Where to learn it: Udemy, Lynda.com, Mac Developer Library, Cocoa Dev Central, Mobile Tuts+.

6. PHP

PHP

What it is: PHP (Hypertext Preprocessor) is a free, server-side scripting language designed for dynamic websites and app development. It can be directly embedded into an HTML source document rather than an external file, which has made it a popular programming language for web developers. PHP powers more than 200 million websites, including WordPress, Digg and Facebook.

Where to learn it: Udemy, Codecademy, Lynda.com, Treehouse, Zend Developer Zone, PHP.net.

7. Python

python

What it is: Python is a high-level, server-side scripting language for websites and mobile apps. It’s considered a fairly easy language for beginners due to its readability and compact syntax, meaning developers can use fewer lines of code to express a concept than they would in other languages. It powers the web apps for Instagram, Pinterest and Rdio through its associated web framework, Django, and is used by Google, Yahoo! and NASA.

Where to learn it: Udemy, Codecademy, Lynda.com, LearnPython.org, Python.org.

8. Ruby

ruby

What it is: A dynamic, object-oriented scripting language for developing websites and mobile apps, Ruby was designed to be simple and easy to write. It powers the Ruby on Rails (or Rails) framework, which is used on Scribd, GitHub, Groupon and Shopify. Like Python, Ruby is considered a fairly user-friendly language for beginners.

Where to learn it: Codecademy, Code School, TryRuby.org, RubyMonk.

9. JavaScript

javascript

What it is: JavaScript is a client and server-side scripting language developed by Netscape that derives much of its syntax from C. It can be used across multiple web browsers and is considered essential for developing interactive or animated web functions. It is also used in game development and writing desktop applications. JavaScript interpreters are embedded in Google’s Chrome extensions, Apple’s Safari extensions, Adobe Acrobat and Reader, and Adobe’s Creative Suite.

Where to learn it: Codecademy, Lynda.com, Code School, Treehouse, Learn-JS.org.

10. SQL

sql2

What it is: Structured Query Language (SQL) is a special-purpose language for managing data in relational database management systems. It is most commonly used for its “Query” function, which searches informational databases. SQL was standardized by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) in the 1980s.

[Mashable]

Mar
11

The 10 Most Popular Mobile Messaging Apps In The World

Author admin    Category IT News     Tags

With Facebook’s acquisition of WhatsApp last month, mobile messaging apps have taken center stage thanks to the sheer weight of their ever-expanding user bases. Such apps are colossal players in the mobile game, originating everywhere from Silicon Valley in California to Gurgaon, India.

Here are 10 international messaging apps whose worldwide influence is racking up millions of users from across the globe.

WhatsApp : United States

With 450 million monthly users, it’s clear why this US-based messaging giant has been the talk of the town. Now under Facebook’s ownership, WhatsApp is still available for iOS, Android, Windows Phone, BlackBerry and Symbian.

Viber : Cyprus

Over 300 million registered members use this Cyprus-based app. In February of 2014, Viber was bought by Japan-based e-commerce and Internet service company Rakuten for $900 million.

WeChat : China

WeChat is owned by China’s Tencent, one of China’s largest Internet companies, and has amassed a following of 450 million monthly users since its founding in 2010.

Line : Japan

With its array of teen-friendly cartoon stickers, Japan’s Naver-owned Line app boasts over 350 million registered users.

KakaoTalk : South Korea

South Korea’s KakaoTalk has over 100 million registered users. This messaging app partnered with Evernote in 2013, in an effort to integrate the U.S. service onto the KakaoTalk mobile app.

Kik : Canada

Kik was founded by University of Waterloo students in 2009 and has gained a following of over 130 million registered users. Operating out of Ontario, Canada, this app boasts 200,000 new members per day.

Tango : United States

Silicon Valley-based messaging app Tango is being utilized in over 224 countries, and according to a Tango representative, is reaching 190 million registered users and growing.

Nimbuzz : India

150 million registered users utilize Nimbuzz, whose headquarters are located in Gurgaon, India. Founded in 2006, this app focuses on messaging, Voice over Internet Protocol, and social networking.

hike : India

With hike based in India, it’s no wonder 60% of its 15 million registered users come from the home country. The other 40% of users originate from Europe and the Middle East, demonstrating a very diverse international appeal.

MessageMe : United States

Born out of San Francisco, this app was founded in 2012 and works to increase engagement by upping the communication experience through stickers, music, and photos. MessageMe has 5 million registered users and growing.

[rww]

Mar
6

Android: The Most Successful Mobile OS in History

Author admin    Category Android, Google, IT News     Tags

A Google executive claimed Wednesday that Android has seen the fastest and most successful adoption of any operating system in history.

Speaking at the Morgan Stanley Technology, Media and Telecom Conference, Nikesh Arora, senior vice president at Google, said the following, courtesy of Seeking Alpha:

I mean, look, in the history of operating systems, I think Android has been the quickest and most successful adoption of an operating system in the world. So you just sort of stop, take pause and say, oh my God, that’s crazy. Nobody could have ever predicted that we’re going to get an operating system adopted in an industry, which has so many different OEMs, manufacturing with their own operating systems having adopted around the world.

A report back in 2012 claimed that both Android and iOS were growing 10 times faster than PCs did in the 1980s.

And it’s clear that iOS on the iPhone and iPad had blistering adoption rates (with one study, back in 2010, showing iPad had the fastest adoption rate ever).

Also, the adoption rates of iOS upgrades, such as iOS 7, tend to outpace those of Android.

But recent data from IDC and App Annie (December 2013) show Android, for example, with a big lead over Apple in the installed base of smartphones (see chart at bottom), while Apple leads in game monetization.

And there are plenty of other studies too — usually focusing on smartphones — that show Android leading.

The success of apps on iOS, however, has been a strong suit for Apple, as a recent Piper Jaffray study, released in January, shows.

In the same report, though, Piper Jaffray argued that the quality of apps on the two platforms is now equalizing and that services will now be the key differentiator.

The initial release of Android was in September 2008. iOS made its debut in June 2007.

So, is Google right? Maybe that’s best left to readers to debate.

[cnet]
