Browsing all articles from April, 2012
Apr
30

Microsoft Research wants to automate your house, introduces HomeOS

Author admin    Category IT News, Microsoft     Tags


Ever wondered if you could control your house's climate, security, and appliances — along with your PCs and peripherals — using Microsoft software? That day may soon dawn: Microsoft Research has been testing its home automation software, called HomeOS, in twelve homes over the past few months. The budding system views smartphones, printers and air conditioners as network peripherals, controlled by a dedicated gateway computer. The project even has a handful of apps in play, which perform functions like energy monitoring, remote surveillance and face recognition. This growing list of applications, available through a portal called "HomeStore", will allow users to easily expand their system's capabilities. So how does it all work out in the real world? Head past the break, and let Redmond's research team give you the skinny.

 

[engadget]

Apr
27

Data Encryption Techniques

Author admin    Category IT News     Tags

Introduction

Often there has been a need to protect information from ‘prying eyes’. In the electronic age, information that could otherwise benefit or educate a group or individual can also be used against such groups or individuals. Industrial espionage among highly competitive businesses often requires that extensive security measures be put into place. And, those who wish to exercise their personal freedom, outside of the oppressive nature of governments, may also wish to encrypt certain information to avoid suffering the penalties of going against the wishes of those who attempt to control.

Still, the methods of data encryption and decryption are relatively straightforward, and easily mastered. I have been doing data encryption since my college days, when I used an encryption algorithm to store game programs and system information files on the university mini-computer, safe from ‘prying eyes’. These were files that raised eyebrows amongst those who did not approve of such things, but were harmless [we were always careful NOT to run our games while people were trying to get work done on the machine]. I was occasionally asked what this “rather large file” contained, and I once demonstrated the program that accessed it, but you needed a password to get to ‘certain files’ nonetheless. And, some files needed a separate encryption program to decipher them.

Methods of Encrypting Data

Traditionally, several methods can be used to encrypt data streams, all of which can easily be implemented through software, but not so easily decrypted when either the original or its encrypted data stream is unavailable. (When both source and encrypted data are available, code-breaking becomes much simpler, though it is not necessarily easy.) The best encryption methods have little effect on system performance, and may contain other benefits (such as data compression) built in. The well-known 'PKZIP®' utility offers both compression AND data encryption in this manner. Also, DBMS packages have often included some kind of encryption scheme so that a standard 'file copy' cannot be used to read sensitive information that might otherwise require some kind of password to access. They also need 'high performance' methods to encode and decode the data.

Encryption methods can be SYMMETRIC in which encryption and decryption keys are the same, or ASYMMETRIC (aka ‘Public Key’) in which encryption and decryption keys differ. ‘Public Key’ methods must be asymmetric, to the extent that the decryption key CANNOT be easily derived from the encryption key. Symmetric keys, however, usually encrypt more efficiently, so they lend themselves to encrypting large amounts of data. Asymmetric encryption is often limited to ONLY encrypting a symmetric key and other information that is needed in order to decrypt a data stream, and the remainder of the encrypted data uses the symmetric key method for performance reasons. This does not in any way diminish the security nor the ability to use a public key to encrypt the data, since the symmetric key method is likely to be even MORE secure than the asymmetric method.

For symmetric key ciphers, there are basically two types: BLOCK CIPHERS, in which a fixed length block is encrypted, and STREAM CIPHERS, in which the data is encrypted one 'data unit' (typically 1 byte) at a time, in the same order it was received in. Fortunately, the simplest of all of the symmetric key 'stream cipher' methods is the TRANSLATION TABLE (or 'S table'), which should easily meet the performance requirements of even the most performance-intensive application that requires data to be encrypted. In a translation table, each 'chunk' of data (usually 1 byte) is used as an offset within one or more arrays, and the resulting 'translated' value is then written into the output stream. The encryption and decryption programs would each use a table that translates to and from the encrypted data. 80x86 CPUs have an instruction, 'XLAT', that lends itself to this purpose.
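As a minimal sketch in C (the function names are mine, not from any standard library), a translation-table stream cipher and its matching inverse table might look like this:

  #include <stddef.h>

  /* Encrypt a buffer in place using a 256-entry translation table.
     'table' must be a permutation of the values 0..255. */
  void xlat_encrypt(unsigned char *buf, size_t len, const unsigned char table[256])
  {
    size_t i;
    for(i = 0; i < len; i++)
      buf[i] = table[buf[i]];
  }

  /* Build the matching decryption table: if table[a] == b, then inverse[b] == a. */
  void xlat_build_inverse(const unsigned char table[256], unsigned char inverse[256])
  {
    int i;
    for(i = 0; i < 256; i++)
      inverse[table[i]] = (unsigned char)i;
  }

Decryption is the same loop run with the inverse table.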

While translation tables are very simple and fast, the down side is that once the translation table is known, the code is broken. Further, such a method is relatively straightforward for code breakers to decipher – such code methods have been used for years, even before the advent of the computer. Still, for general “unreadability” of encoded data, without adverse effects on performance, the ‘translation table’ method lends itself well.

A modification to the ‘translation table’ uses 2 or more tables, based on the position of the bytes within the data stream, or on the data stream itself. Decoding becomes more complex, since you have to reverse the same process reliably. But, by the use of more than one translation table, especially when implemented in a ‘pseudo-random’ order, this adaptation makes code breaking relatively difficult. An example of this method might use translation table ‘A’ on all of the ‘even’ bytes, and translation table ‘B’ on all of the ‘odd’ bytes. Unless a potential code breaker knows that there are exactly 2 tables, even with both source and encrypted data available the deciphering process is relatively difficult.
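A sketch of that even/odd variant, reusing the style of the previous example ('tabA' and 'tabB' are hypothetical names for the two tables):

  #include <stddef.h>

  /* Two-table variant: table 'A' encrypts bytes at even offsets,
     table 'B' encrypts bytes at odd offsets. */
  void xlat2_encrypt(unsigned char *buf, size_t len,
                     const unsigned char tabA[256], const unsigned char tabB[256])
  {
    size_t i;
    for(i = 0; i < len; i++)
      buf[i] = (i & 1) ? tabB[buf[i]] : tabA[buf[i]];
  }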

Similar to using a translation table, ‘data repositioning’ lends itself to use by a computer, but takes considerably more time to accomplish. This type of cipher would be a trivial example of a BLOCK CIPHER. A buffer of data is read from the input, then the order of the bytes (or other ‘chunk’ size) is rearranged, and written ‘out of order’. The decryption program then reads this back in, and puts them back ‘in order’. Often such a method is best used in combination with one or more of the other encryption methods mentioned here, making it even more difficult for code breakers to determine how to decipher your encrypted data. As an example, consider an anagram. The letters are all there, but the order has been changed. Some anagrams are easier than others to decipher, but a well written anagram is a brain teaser nonetheless, especially if it’s intentionally misleading.
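A trivial repositioning cipher might scatter each 8-byte block according to a key-derived permutation. This sketch (all names hypothetical) shows both directions:

  /* Scatter one 8-byte block: perm[] is a permutation of 0..7,
     presumably derived from the key. */
  void block_scramble(const unsigned char in[8], unsigned char out[8], const int perm[8])
  {
    int i;
    for(i = 0; i < 8; i++)
      out[perm[i]] = in[i];
  }

  /* Decryption gathers the bytes back into their original order. */
  void block_unscramble(const unsigned char in[8], unsigned char out[8], const int perm[8])
  {
    int i;
    for(i = 0; i < 8; i++)
      out[i] = in[perm[i]];
  }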

My favorite methods, however, involve something that only computers can do: word/byte rotation and XOR bit masking. This is very common since it has relatively high ENTROPY in the resulting cipher. High entropy data is difficult to extract information from, and the higher the entropy, the better the cipher. So, if you rotate the words or bytes within a data stream, using a method that involves multiple and variable direction and duration of rotation, in an easily reproducible pattern, you can quickly encode a stream of data with a method that can be nearly impossible to break. Further, if you use an 'XOR mask' in combination with this ('flipping' the bits in certain positions from 1 to 0, or 0 to 1) you end up making the code breaking process even more difficult. The best combination would also use 'pseudo random' effects, the easiest of which might involve a simple sequence like Fibonacci numbers, which can appear 'pseudo-random' after many iterations of 'modular' arithmetic (i.e. math that 'wraps around' after reaching a limit, like integer math on a computer). The Fibonacci sequence '1, 1, 2, 3, 5, …' is easily generated by adding the previous 2 numbers in the sequence to get the next. Doing modular arithmetic on the result and operating on multiple byte sequences (using a prime number of bytes for block rotation, as one example) would make the code breaker's job even more difficult, adding the 'pseudo-random' effect that is easily reproduced by your decryption program.
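To make the idea concrete, here is a minimal sketch of a rotate-and-XOR stream, with the rotation counts driven by a Fibonacci-style sequence under wrap-around integer arithmetic. The seeds would come from your key; all names are hypothetical:

  #include <stddef.h>

  /* Rotate a byte left by n bits (n taken mod 8). */
  static unsigned char rotl8(unsigned char v, unsigned int n)
  {
    n &= 7;
    return (unsigned char)((v << n) | (v >> ((8 - n) & 7)));
  }

  void rotxor_encrypt(unsigned char *buf, size_t len,
                      unsigned long seed1, unsigned long seed2, unsigned char mask)
  {
    size_t i;
    for(i = 0; i < len; i++)
    {
      unsigned long next = seed1 + seed2;  /* Fibonacci step; wraps like integer math */
      seed1 = seed2;
      seed2 = next;
      buf[i] = (unsigned char)(rotl8(buf[i], (unsigned int)(next & 7)) ^ mask);
    }
  }

Decryption runs the same key-driven sequence, but XORs first and then rotates right by the same counts.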

In some cases, you may want to detect whether data has been tampered with, and encrypt some kind of 'checksum' or CRC into the data stream itself. This is useful not only for authorization codes and licenses (where encrypted data is expected to be used) but also for programs themselves. A virus that infects such a 'protected' program is unlikely to reproduce the encrypted authorization/checksum signature that has been written into the executable binary file(s). The program (and any dynamic library) could then check itself each time it loads, and thus detect the presence of file corruption. Such a method would have to be kept VERY secret, to prevent virus programmers from exploiting it at some point.
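A sketch of the idea, with a simple 32-bit folding checksum standing in for a real CRC (a genuine CRC-32 would be stronger; names are hypothetical). You compute the checksum over the payload, store it alongside, encrypt everything, and verify after decryption:

  #include <stddef.h>

  /* Fold each byte into a rotating 32-bit sum. */
  unsigned long simple_checksum(const unsigned char *data, size_t len)
  {
    unsigned long sum = 0;
    size_t i;
    for(i = 0; i < len; i++)
      sum = (((sum << 1) | ((sum >> 31) & 1)) ^ data[i]) & 0xFFFFFFFFUL;
    return sum;
  }

  /* Returns nonzero if the decrypted payload still matches its stored checksum. */
  int verify_payload(const unsigned char *data, size_t len, unsigned long stored)
  {
    return simple_checksum(data, len) == stored;
  }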

Public Key Encryption Algorithms

One very important feature of a good encryption scheme is the ability to specify a 'key' or 'password' of some kind, and have the encryption method alter itself such that each 'key' or 'password' produces a unique encrypted output, one that also requires a unique 'key' or 'password' to decrypt. This can be either a symmetric or an asymmetric key. The popular 'PGP' public key encryption, and the 'RSA' encryption that it's based on, use an 'asymmetrical' key, allowing you to share the 'public' encryption key with everyone, while keeping the 'private' decryption key safe. The encryption key is significantly different from the decryption key, such that attempting to derive the private key from the public key involves too many hours of computing time to be practical. It would NOT be impossible, just highly unlikely, which is 'pretty good'.

There are few operations in mathematics that are truly 'irreversible'. In nearly all cases, the commutative property or an 'inverse' operation applies. If an operation is performed on 'a', resulting in 'b', you can perform an equivalent operation on 'b' to get 'a'. In some cases you may get only the absolute value (as with a square root), or the operation may be undefined (such as dividing by zero). However, it may be possible to base an encryption key on an algorithm such that you cannot perform a direct calculation to get the decryption key. An operation that would cause a division by zero would PREVENT a public key from being directly translated into a private key. As such, only 'trial and error' (otherwise known as a 'brute force' attack) would remain as a valid 'key cracking' method, and it would therefore require a significant amount of processing time to create the private key from the public key.

In the case of the RSA encryption algorithm, it uses very large prime numbers to generate the public key and the private key. Although it would be possible to factor the public key to get the private key (a trivial matter once the 2 prime factors are known), the numbers are so large as to make it very impractical to do so. The encryption algorithm itself is ALSO very slow, which makes it impractical to use RSA to encrypt large data sets. So PGP (and other RSA-based encryption schemes) encrypt a symmetrical key using the public key, then encrypt the remainder of the data with a faster algorithm using the symmetrical key. The symmetrical key itself is randomly generated, so that the only (theoretical) way to get it would be by using the private key to decrypt the RSA-encrypted symmetrical key.

Example: Suppose you want to encrypt data (let's say this web page) with a key of 12345. Using your public key, you RSA-encrypt the 12345, and put that at the front of the data stream (possibly followed by a marker or preceded by a data length to distinguish it from the rest of the data). THEN, you follow the 'encrypted key' data with the encrypted web page text, encrypted using your favorite method and the key '12345'. Upon receipt, the decrypt program looks for (and finds) the encrypted key, uses the 'private key' to decrypt it, and gets back the '12345'. It then locates the beginning of the encrypted data stream, and applies the key '12345' to decrypt the data. The result: a very well protected data stream that is reliably and efficiently encrypted, transmitted, and decrypted.
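A minimal framing sketch of that example follows. Here rsa_encrypt() and sym_encrypt() are hypothetical stand-ins for your RSA and symmetric routines, not real library calls; the point is the stream layout, a length-prefixed RSA-encrypted key block followed by the symmetrically encrypted payload:

  #include <stddef.h>
  #include <string.h>

  size_t rsa_encrypt(const unsigned char *in, size_t len, unsigned char *out); /* assumed */
  void   sym_encrypt(unsigned char *buf, size_t len, unsigned long key);       /* assumed */

  /* Pack: [2-byte key-block length][RSA-encrypted symmetric key][encrypted payload] */
  size_t hybrid_pack(unsigned char *out, const unsigned char *page, size_t page_len,
                     unsigned long sym_key)
  {
    unsigned char keybuf[4];
    size_t klen;

    keybuf[0] = (unsigned char)(sym_key >> 24);
    keybuf[1] = (unsigned char)(sym_key >> 16);
    keybuf[2] = (unsigned char)(sym_key >> 8);
    keybuf[3] = (unsigned char)(sym_key);

    klen = rsa_encrypt(keybuf, sizeof(keybuf), out + 2);
    out[0] = (unsigned char)(klen >> 8);  /* length prefix distinguishes the */
    out[1] = (unsigned char)(klen);       /* key block from the payload      */

    memcpy(out + 2 + klen, page, page_len);
    sym_encrypt(out + 2 + klen, page_len, sym_key);
    return 2 + klen + page_len;
  }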

Source files for a simple RSA-based encryption algorithm can be found HERE:
ftp://ftp.funet.fi/pub/crypt/cryptography/asymmetric/rsa

It is somewhat difficult to write a front-end to get this code to work (I have done so myself), but for the sake of illustration, the method actually DOES work and by studying the code you can understand the processes involved in RSA encryption.  RSA, incidentally, is reportedly patented through the year 2000, and may be extended beyond that, so commercial use of RSA requires royalty payments to the patent holder (www.rsa.com).  But studying the methods and experimenting with it is free, and with source code being published in print (PGP) and outside the U.S., it’s a good way to learn how it works, and maybe to help you write a better algorithm yourself.

A ‘multi-phase’ S Table method, invented by me

Some time ago, in the late 1990s, I developed and tested an encryption method that is (in my opinion) nearly uncrackable, so long as it is implemented properly. The reasons why will be pretty obvious when you take a look at the method itself. I originally explained it in prose due to the way US export laws were written at the time. Fortunately, for open source, it appears that it is only necessary to inform the right people of the URL (or to send them a copy of the source), which is a LOT more reasonable than it used to be. So thumb-on-nose aside, the encryption method itself is relatively straightforward, and reasonably strong (I recently analyzed it using some standard statistical methods and the entropy is pretty good, especially when I made a minor alteration to correct certain trivial cases that resulted in poor entropy, such as a block of zeros). According to THIS web site, my algorithm (which I have dubbed 'pelcg', the ROT13 for 'crypt') could be described as an asynchronous stream cipher with a symmetrical key.
So, here goes (description of method first made public on June 1, 1998):

  • Using a set of numbers (let’s say a 128-bit key, or 256-bit key if you use 64-bit integers), generate a repeatable but highly randomized pseudo-random number sequence (see below for an example of a pseudo-random number generator).
  • 256 entries at a time, use the random number sequence to generate arrays of “cipher translation tables” as follows:
    • fill an array of integers with 256 random numbers (see below)
    • sort the numbers using a method (like pointers) that lets you know the original position of the corresponding number
    • using the original positions of the now-sorted integers, generate a table of randomly sorted numbers between 0 and 255. If you can’t figure out how to make this work, you could give up now… but on a kinder note, I’ve supplied some source below to show how this might be done – generically, of course.
  • Now, generate a specific number of 256-byte tables. Let the random number generator continue “in sequence” for all of these tables, so that each table is different.
  • Next, use a "shotgun technique" to generate "de-crypt" cipher tables. Basically, if a maps to b, then b must map to a. So, b[a[n]] = n. Get it? ('n' is a value between 0 and 255). Assign these values in a loop, with a set of 256-byte 'decrypt' tables that correspond to the 256-byte 'encrypt' tables you generated in the preceding step.

    NOTE: I first tried this on a P5 133MHz machine, and it took 1 second to generate the 2 256×256 tables (128KB total). With this method, I inserted additional randomized 'table order', so that the order in which I created the 256-byte tables was part of a 2nd pseudo-random sequence, fed by 2 additional 16-bit keys.

  • Now that you have the translation tables, the basic cipher works like this: the previous byte's encrypted value is the index that selects which 256-byte translation table to use. Alternately, for improved encryption, you can use more than one byte, and either use a 'checksum' or a CRC algorithm to generate the index byte. You can then 'mod' it with the # of tables if you use fewer than 256 256-byte tables. Assuming the table is a 256×256 array, it would look like this:

    crypto1 = a[crypto0][value]
    NOTE: this has a weakness: a 'blob of zeros' results in poor entropy.
    Altering 'value' first with a rotating XOR mask would help correct this.
    You would also need to make a similar change for the decrypt operation.

    where 'crypto1' is the encrypted byte, and 'crypto0' is the previous byte's encrypted value (or a function of several previous values). Naturally, the 1st byte will need a "seed", which must be known. This may increase the total cipher size by an additional 8 bits if you use 256×256 tables. Or, you can use the key you generated the random list with, perhaps taking the CRC of it, or using it as a "lead in" encrypted byte stream. Incidentally, I have tested this method using 16 'preceding' bytes to generate the table index, starting with the 128-bit key as the initial seed of '16 previous bytes'. I was then able to encrypt about 100kbytes per second with this algorithm, after the initial time delay in creating the table.

  • On the decrypt, you do the same thing. Just make sure you use ‘encrypted’ values as your table index both times. Or, use ‘decrypted’ values if you’d rather. They must, of course, match.

The pseudo-random sequence can be designed by YOU to be ANYTHING that YOU want. Without details on the sequence, the source code, or the compiled binary image, the cipher key itself is worthless. PLUS, a block of identical ASCII characters will translate into random garbage with (potentially) high entropy, each byte depending upon the encrypted value of the preceding byte (which is why I use the ENCRYPTED value, not the actual value, as the table index). You'll get a random set of permutations for any single character, permutations that are of random length, that effectively hide the true size of the cipher.
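The inner loop of the method, as a minimal sketch assuming a full set of 256 tables and a single preceding byte as the index (the '16 preceding bytes' variation just replaces 'prev' with a checksum of recent output):

  #include <stddef.h>

  /* enc[][] and dec[][] are the 256 encrypt/decrypt tables generated above;
     'seed' is the known initial index for the first byte. */
  void ptable_encrypt(unsigned char *buf, size_t len,
                      const unsigned char enc[256][256], unsigned char seed)
  {
    unsigned char prev = seed;
    size_t i;
    for(i = 0; i < len; i++)
    {
      buf[i] = enc[prev][buf[i]];
      prev = buf[i];               /* the ENCRYPTED value selects the next table */
    }
  }

  void ptable_decrypt(unsigned char *buf, size_t len,
                      const unsigned char dec[256][256], unsigned char seed)
  {
    unsigned char prev = seed;
    size_t i;
    for(i = 0; i < len; i++)
    {
      unsigned char c = buf[i];    /* save the encrypted value... */
      buf[i] = dec[prev][c];
      prev = c;                    /* ...so both sides index identically */
    }
  }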

However, if you're at a loss for a random sequence, consider a FIBONACCI sequence, using 2 DWORDs (like from your encryption key) as "seed" numbers, and possibly a 3rd DWORD as an 'XOR' mask. An algorithm for generating a random sequence of numbers, not necessarily connected with encrypting data, might look as follows:

  unsigned long dw1, dw2, dw3, dwMask;
  int i1;
  unsigned long aRandom[256];

  dw1 = {seed #1};
  dw2 = {seed #2};
  dwMask = {seed #3};
  // this gives you 3 32-bit "seeds", or 96 bits total

  for(i1=0; i1 < 256; i1++)
  {
    dw3 = (dw1 + dw2) ^ dwMask;
    aRandom[i1] = dw3;
    dw1 = dw2;
    dw2 = dw3;
  }

If you wanted to generate a list of random sequence numbers, let’s say between zero and the total number of random numbers in the list, you could try something like THIS:

int __cdecl MySortProc(const void *p1, const void *p2)
{
  unsigned long * const *pp1 = (unsigned long * const *)p1;
  unsigned long * const *pp2 = (unsigned long * const *)p2;

  if(**pp1 < **pp2)
    return(-1);
  else if(**pp1 > **pp2)
    return(1);

  return(0);
}

...

  int i1;
  unsigned long *apRandom[256];
  unsigned long aRandom[256];  // same array as before, in this case
  int aResult[256];  // results go here

  for(i1=0; i1 < 256; i1++)
  {
    apRandom[i1] = aRandom + i1;
  }

  // now sort it
  qsort(apRandom, 256, sizeof(*apRandom), MySortProc);

  // final step - offsets for pointers are placed into output array
  for(i1=0; i1 < 256; i1++)
  {
    aResult[i1] = (int)(apRandom[i1] - aRandom);
  }

...

The result in ‘aResult’ should be a randomly sorted (but unique) array of integers with values between 0 and 255, inclusive. Such an array could be useful, for example, as a byte for byte translation table, one that could easily and reliably be reproduced based solely upon a short length key (in this case, the random number generator seed); however, in the spirit of the ‘GUTLESS DISCLAIMER’ (below), such a table could also have other uses, perhaps as a random character or object positioner for a game program, or as a letter scrambler for an anagram generator.

GUTLESS DISCLAIMER: The sample code above does not in and of itself constitute an encryption algorithm, or necessarily represent a component of one. It is provided solely for the purpose of explaining some of the more obscure concepts discussed in prose within this document. Any other use is neither proscribed nor encouraged by the author of this document, S.F.T. Inc., or any individual or organization that is even remotely connected with this web site.

Weaknesses in Encryption

An encryption method might seem 'safe' on the outside, and even accept a ridiculously large key, but if the data it generates is NOT 'random' in appearance, it may be possible to develop a method that exploits the 'non-random' patterns to greatly reduce the amount of time it would take to 'crack' the cipher. This kind of exploitation has already been demonstrated in several instances.

One particular method that can be used to reveal weakness is a statistical analysis of the results of the encryption. This can be done with or without the original data. A method involving a statistical breakdown of byte patterns, such as the number of times any particular value appears in the encrypted output, would quickly reveal whether any potential patterns might exist. Similar 'byte A follows B' analysis could reveal the same kinds of weaknesses. This sort of analysis could even be done with a SPREADSHEET application, where a high standard deviation would indicate poor entropy. Ideally, the algorithm would have an entropy similar to that of a truly random sequence. So performing your analysis FIRST on random numbers (try /dev/urandom on non-Windows systems), and THEN applying the same analysis to the output of the encryption algorithm, would give you a nice indication of just how much entropy your algorithm has.
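For a concrete starting point, a quick Shannon-entropy measurement of an output stream might look like this in C. Values near 8 bits per byte resemble truly random data; markedly lower values indicate exploitable structure:

  #include <math.h>
  #include <stddef.h>

  /* Shannon entropy of a byte stream, in bits per byte. Run it on
     /dev/urandom output first, then on your cipher's output, and compare. */
  double byte_entropy(const unsigned char *data, size_t len)
  {
    unsigned long count[256] = {0};
    double h = 0.0;
    size_t i;
    int v;

    if(len == 0)
      return 0.0;

    for(i = 0; i < len; i++)
      count[data[i]]++;

    for(v = 0; v < 256; v++)
    {
      if(count[v])
      {
        double p = (double)count[v] / (double)len;
        h -= p * log(p) / log(2.0);  /* log2(p) */
      }
    }
    return h;
  }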

Another method involves 'predictability'. If you know that a particular sequence of data results in a particular pattern in the encryption stream, you can use these patterns to partially decrypt the content. Once a partial decrypt has been performed, knowledge of the algorithm may be enough to help you generate the key that created the cipher stream. This technique was used by the Bletchley Park team to help crack 'Enigma' back in World War 2. Commonly used phrases like 'Heil Hitler' were used in their analysis, which in many ways is ironic. They used paper cards to create what they called 'cribs' to help visually locate these patterns within the encrypted data. It got to the point where they could read the encrypted information in 'real time', sometimes before the recipient got his copy of the unencrypted message.

Conclusion

Because of the need to ensure that only those eyes intended to view sensitive information can ever see this information, and to ensure that the information arrives un-altered, security systems have often been employed in computer systems for governments, corporations, and even individuals. Encryption schemes can be broken, but making them as hard as possible to break is the job of a good cipher designer. All you can really do is make it very, very difficult for the code breaker to decipher your cipher. Still, as long as both source and encrypted data are available, it will always be possible to break your code. It just won't necessarily be easy.

[mrp3]

Apr
27

Encryption methods

Author admin    Category IT News     Tags

For request generator binding settings, the encryption methods include specifying the data and key encryption algorithms to use to encrypt the SOAP message. The WSS API for encryption (WSSEncryption) specifies the algorithm name and the matching algorithm uniform resource identifier (URI) for the data and key encryption methods. If the data and key encryption algorithms are specified, only elements that are encrypted with those algorithms are accepted.

Data encryption algorithms

The data encryption algorithm is used to encrypt parts of the SOAP message, including the body and the signature. Data encryption algorithms specify the algorithm uniform resource identifier (URI) for each type of data encryption algorithm.

The following pre-configured data encryption algorithms are supported:

Table 1. Data encryption algorithms. The algorithms are used to encrypt SOAP messages.

Data encryption algorithm name              Algorithm URI
WSSEncryption.AES128 (the default value)    AES 128: http://www.w3.org/2001/04/xmlenc#aes128-cbc
WSSEncryption.AES192                        AES 192: http://www.w3.org/2001/04/xmlenc#aes192-cbc
WSSEncryption.AES256                        AES 256: http://www.w3.org/2001/04/xmlenc#aes256-cbc
WSSEncryption.TRIPLE_DES                    TRIPLE DES: http://www.w3.org/2001/04/xmlenc#tripledes-cbc

By default, the Java Cryptography Extension (JCE) is shipped with restricted or limited strength ciphers. To use 192-bit and 256-bit Advanced Encryption Standard (AES) encryption algorithms, you must apply unlimited jurisdiction policy files.

Important: Your country of origin might have restrictions on the import, possession, use, or re-export to another country, of encryption software. Before downloading or using the unrestricted policy files, you must check the laws of your country, its regulations, and its policies concerning the import, possession, use, and re-export of encryption software, to determine if it is permitted.

For the AES 256-CBC and AES 192-CBC algorithms, you must download the unrestricted Java™ Cryptography Extension (JCE) policy files from the following Web site: http://www.ibm.com/developerworks/java/jdk/security/index.html.

The data encryption algorithm configured for encryption for the generator side must match the data encryption algorithm that is configured for decryption for the consumer side.

Key encryption algorithms

This algorithm is used to encrypt and decrypt keys. This key information is used to specify the configuration that is needed to generate the key for digital signature and encryption. The signing information and encryption information configurations can share the key information. The key information on the consumer side is used for specifying the information about the key that is used for validating the digital signature in the received message or for decrypting the encrypted parts of the message. The request generator is configured for the client.

Note: Policy sets do not support symmetric key encryption. If you are using the WSS API for symmetric key encryption, you will not be able to interoperate with Web services endpoints using the policy sets.

Key encryption algorithms specify the algorithm uniform resource identifier (URI) of the key encryption method. The following pre-configured key encryption algorithms are supported:

Table 2. Supported pre-configured key encryption algorithms. The algorithms are used to encrypt and decrypt keys.

WSS API                                    URI
WSSEncryption.KW_AES128                    Key wrap AES 128: http://www.w3.org/2001/04/xmlenc#kw-aes128
WSSEncryption.KW_AES192                    Key wrap AES 192: http://www.w3.org/2001/04/xmlenc#kw-aes192
                                           (Restriction: do not use the 192-bit key encryption algorithm if your configured application must comply with the Basic Security Profile (BSP).)
WSSEncryption.KW_AES256                    Key wrap AES 256: http://www.w3.org/2001/04/xmlenc#kw-aes256
WSSEncryption.KW_RSA_OAEP (the default)    Key wrap RSA OAEP: http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p
WSSEncryption.KW_RSA15                     Key wrap RSA 1.5: http://www.w3.org/2001/04/xmlenc#rsa-1_5
WSSEncryption.KW_TRIPLE_DES                Key wrap TRIPLE DES: http://www.w3.org/2001/04/xmlenc#kw-tripledes

For Secure Conversation, additional key-related information must be specified, such as:

  • algorithmName
  • keyLength
By default, the RSA-OAEP algorithm uses the SHA1 message digest algorithm to compute a message digest as part of the encryption operation. Optionally, you can use the SHA256 or SHA512 message digest algorithm by specifying a key encryption algorithm property. The property name is: com.ibm.wsspi.wssecurity.enc.rsaoaep.DigestMethod. The property value is one of the following URIs of the digest method:

  • http://www.w3.org/2001/04/xmlenc#sha256
  • http://www.w3.org/2001/04/xmlenc#sha512

By default, the RSA-OAEP algorithm uses a null string for the optional encoding octet string for the OAEPParams. You can provide an explicit encoding octet string by specifying a key encryption algorithm property. For the property name, you can specify com.ibm.wsspi.wssecurity.enc.rsaoaep.OAEPparams. The property value is the base 64-encoded value of the octet string.

Important: You can set these digest method and OAEPParams properties on the generator side only. On the consumer side, these properties are read from the incoming SOAP message.

For the KW-AES256 and the KW-AES192 key encryption algorithms, you must download the unrestricted JCE policy files from the following Web site: http://www.ibm.com/developerworks/java/jdk/security/index.html.

The key encryption algorithm for the generator must match the key decryption algorithm that is configured for the consumer.

Example

This example provides sample encryption code that uses Triple DES as the data encryption method and RSA 1.5 as the key encryption method:

  // get the message context
  Object msgcontext = getMessageContext();

  // generate the WSSFactory instance
  WSSFactory factory = WSSFactory.getInstance();

  // generate the WSSGenerationContext instance
  WSSGenerationContext gencont = factory.newWSSGenerationContext();

  // generate the callback handler
  X509GenerateCallbackHandler callbackHandler = new X509GenerateCallbackHandler(
          "",
          "enc-sender.jceks",
          "jceks",
          "storepass".toCharArray(),
          "bob",
          null,
          "CN=Bob, O=IBM, C=US",
          null);

  // generate the security token used for the encryption
  SecurityToken token = factory.newSecurityToken(X509Token.class,
          callbackHandler);

  // generate a WSSEncryption instance to encrypt the SOAP body content
  WSSEncryption enc = factory.newWSSEncryption(token);
  enc.addEncryptPart(WSSEncryption.BODY_CONTENT);

  // set the data encryption method
  // DEFAULT: WSSEncryption.AES128
  enc.setEncryptionMethod(WSSEncryption.TRIPLE_DES);

  // set the key encryption method
  // DEFAULT: WSSEncryption.KW_RSA_OAEP
  enc.setEncryptionMethod(WSSEncryption.KW_RSA15);

  // add the WSSEncryption to the WSSGenerationContext
  gencont.add(enc);

  // generate the WS-Security header
  gencont.process(msgcontext);



[ibm]
Apr
26

Macs Spread Malware To PCs

Author admin    Category Apple, IT News, Malware, Security     Tags

 

 

Call it Steve Jobs' revenge. Security vendor Sophos has discovered that one in five Mac computers surveyed carries malware that could infect Windows PCs. In a bit of delicious irony, only one in 36 Apple computers was found to be infected with Mac OS X malware. The results bring an odd sense of urgency to worries about Mac security.

Macs As “Patient Zero”

While Windows malware can’t damage a Mac, UK-based Sophos encourages Mac users to be “a responsible member of society” by ensuring their systems don’t infect other computers. In a tacky comparison, the security company compared an infected Mac to a person who has Chlamydia, a sexually transmitted disease that carriers often don’t know they have until they get tested.

Like many Chlamydia victims, Mac owners “are doing a pretty poor job” in keeping their systems clean, writes Graham Cluley, senior technology consultant at Sophos, in the company’s blog. Some of the malware discovered on Macs dated back to 2007 and would have easily been detected if the users had run anti-virus software.

Much as on a Windows PC, malware can infect Macs via USB drives, email attachments or even just by visiting a compromised website. Sophos has even seen malicious Web sites that secretly install malware on Macs with unpatched software.

Mac users take bigger cyber-risks not because their machines are invulnerable to attack (some experts claim they're actually more vulnerable than Windows PCs) but because cybercriminals have ignored Apple systems for decades. Only in the last few years has the number of Macs on the Internet reached a level that draws the interest of serious malware creators. "Sadly, cybercriminals view Macs as a soft target, because their owners are less likely to be running anti-virus," Cluley notes.

Is the Free Ride Over?

That's certainly true, but the reality is that Mac users have pretty much gotten away with lax security, so there was little incentive. And unless Mac users are feeling altruistic (not likely, given Apple's long-running ad campaigns ridiculing PC users) – or running Windows and Windows programs on their machines – there still isn't much incentive. At least for now.

If that ever changes, it could be due to the deeper pockets of the average Mac user. If Apple customers can afford to pay a premium for the company's computers, then cybercriminals may believe there's greater profit in stealing passwords to an online banking site visited with a Mac. "They might believe the potential for return is much higher," Cluley says.

In the meantime, though, Mac malware is pretty much the same as Windows malware. Slightly more than three in four of the Mac malware samples Sophos discovered targeted a vulnerability in the Java platform that Apple patched this month, nearly two months after a fix was available for Windows PCs. The password-stealing malware, called Flashback, had infected more than 600,000 Macs, roughly 1% of all in use, before Apple started working with Internet service providers to take offline servers suspected of spreading the malware.

After Flashback, the second most common form of malware consisted of pop-up screens on Web sites that pretend to find viruses on visitors' computers and then try to scare them into buying malware disguised as removal tools.

[RWW]

Apr
25

Google Drive (Cloud) Officially Launches, with 5GB of Free Online Storage

Author admin    Category Cloud, Google, IT News     Tags

After living in the rumor mill for quite a while, Google Drive was finally released officially a few hours ago. The service provides 5 GB of free online file storage, which can be increased to as much as 1 TB by paying a monthly fee.

Google Drive's interface is based on Google Docs, and its users can collaborate with others on documents, spreadsheets, and presentations. This collaboration and sharing system is also supported by a commenting feature that sends a notification whenever something new is posted.

Drive is currently available in Windows and Mac OS X versions for computers, and as an Android app for mobile devices. iPhone and iPad users will also get the app soon, judging by the information Google has posted.

Once Drive is installed, users get a list of all their documents from Google Docs and all the files in a designated folder on their computer. These files also show up in the list in the mobile app.

Google has also built the search capability that has long been the core of its business into Drive. Users can search for files by keyword, file type, owner, and so on. Drive can even recognize text from scanned documents using OCR technology, and it goes as far as image recognition. For example, given a picture of a famous landmark such as the Colosseum, Drive will be able to find that image when the user searches for it.

 

Google offers additional storage for a monthly fee. An extra 25 GB costs 2.49 dollars per month, 100 GB costs 4.99 dollars per month, and 1 TB costs 49.99 dollars per month. As a bonus, users who upgrade will also see their Gmail storage allowance increase to 25 GB.

To support Drive, Google has also integrated it with other services such as Google+ and Gmail. Google says users will eventually be able to share files from Drive to Google+ or attach them directly in Gmail. Google has also partnered with various other companies to integrate the service into their products, such as Wevideo, Aviary for audio editing, plus Hellofax and Lucidchart.

 

[teknoup]

Apr
25

Microsoft creates Cloud comparison chart (SkyDrive, iCloud, Google Drive and Dropbox)

Author admin    Category Cloud, IT News     Tags

 

With the announcement of Google Drive and yesterday's major update to SkyDrive, you might be wondering how all of the products look when you place them head to head.

We already took a quick peek, of course, at Google Drive's and SkyDrive's TOS, which highlighted each product's stance on privacy, but Microsoft has conveniently put together a chart to show why SkyDrive is best.

Yes, we know this has a Microsoft slant as it comes from the company, but let's be honest: they aren't exactly digging deep to make the comparisons that prove their point.

The chart speaks for itself and compares SkyDrive, iCloud, Google Drive and Dropbox in a head-to-head matchup that clearly puts SkyDrive in the spotlight. Microsoft touts that SkyDrive works seamlessly with Office and Windows:

  • Save a document from Microsoft Word on your PC to your SkyDrive folder. Keep writing at work using SkyDrive.com and Word Web App. Unlike Google, there are no conversions and no formatting issues.
  • Use fetch to access any file – not just ones in your SkyDrive folder – on your Windows PC from anywhere
  • Access and save to your SkyDrive from any app in Windows 8 – automatically

[neowin]

Apr
24

Where IT is going: Cloud, mobile, and data

Author admin    Category IT News     Tags

 

Cloud computing seems to often get used as a catch-all term for the big trends happening in IT.

This has the unfortunate effect of adding additional ambiguities to a topic that’s already laden with definitional overload. (For example, on a topic like security or compliance, it makes a lot of difference whether you’re talking about public clouds like Amazon’s, a private cloud within an enterprise, a social network, or some mashup of two or more of the above.)

However, I’m starting to see a certain consensus emerge about how best to think about the broad sense of cloud, which is to say IT’s overall trajectory. It doesn’t have a catchy name; when it’s labeled at all, it’s usually “Next Generation IT” or something equally innocuous. It views IT’s future as being shaped by three primary forces. While there are plenty of other trends and technology threads in flight, most of them fit pretty comfortably within this framework.

The three big trends? Cloud computing, mobility, and “big data.”

Through the lens of next-generation IT, think of cloud computing as being about trends in computer architectures, how applications are loaded onto those systems and made to do useful work, how servers communicate with each other and with the outside world, and how administrators manage and provide access. This trend also encompasses all the infrastructure and “plumbing” that makes it possible to effectively coordinate data centers full of systems increasingly working as a unified compute resource as opposed to islands of specialized capacity.

Cloud computing in this sense embodies all the big changes in back-end computation. Many of these relate to Moore's Law, Intel co-founder Gordon Moore's 1965 observation that the number of transistors it's economically possible to build into an integrated circuit doubles approximately every two years. This exponential increase in the density of the switches at the heart of all computer logic has led to corresponding increases in computational power. (Although the specific ways that transistors get turned into performance have shifted over time.)

Moore’s Law has also had indirect consequences. Riding Moore’s Law requires huge investments in both design and manufacturing. Intel’s next-generation Fab 42 manufacturing facility in Arizona is expected to cost more than $5 billion to build and equip. Although not always directly related to Moore’s Law, other areas of the computing “stack” — especially in hardware such as disk drives — require similarly outsized investments. The result has been an industry oriented around horizontal specialties such as chips, servers, disk drives, storage arrays, operating systems, and databases rather than, as was once the case, integrated systems designed and built by a single vendor.

Cloud computing, mobility, and big data are the three big trends shaping the evolution of how computing gets done.

(Credit: Gordon Haff)

This industry structure implies standardization with a relatively modest menu of mainstream choices within each level of the stack: x86 and perhaps ARM for server processors, Linux and Windows for operating systems, Ethernet and InfiniBand for networking, and so forth. This standardization, in concert with other technology trends such as virtualization, makes it possible to create large and highly automated pools of computing that can scale up and down with traffic, can be re-provisioned for new purposes rapidly, can route around failures of many types, and provide streamlined self-service access for users. Open source has been a further important catalyst. Without open source, it’s difficult to imagine that infrastructures on the scale of those at Google and Amazon would be possible.

The flip side of cloud computing is mobility. If cloud computing is the evolved data center, mobility is the client. Perhaps the most obvious shift here is away from “fat client” PC dominance and towards simpler client devices like tablets and smartphones connecting through wireless networks using Web browsers and lightweight app store applications. This shift is increasingly changing how organizations think about providing their employees with computers, a shift that often goes by the “Bring Your Own Device” phrase.

However, there’s much more to the broad mobility trend than just tablets and smartphones. The “Internet of Things,” a term attributed to RFID pioneer Kevin Ashton, posits a world of ubiquitous sensors that can be used to make large systems, such as the electric grid or a city, “smarter.” Which is to say, able to make adjustments for efficiency or other reasons in response to changes in the environment. While this concept has long had a certain just-over-the-horizon futurist aspect, more and more devices are getting plugged into the Internet, even if the changes are sufficiently gradual that the effects aren’t immediately obvious.

Mobility is also behind many of the changes in how applications are being developed — although, especially within enterprises, there’s a huge inertia to both existing software and its associated development and maintenance processes. That said, the consumer Web has created pervasive new expectations for software ease-of-use and interactivity just as public cloud services such as Amazon Web Services have created expectations of how much computing should cost. The Consumerization of Everything means smaller and more modular applications that can be more quickly developed, greater reliance on standard hosted software, and a gradual shift towards languages and frameworks supporting this type of application use and development. It’s also leading to greater integration between development and IT operations, a change embodied in the “DevOps” term.

The third trend is big data. It’s intimately related to the other two. Endpoint devices like smartphones and sensors create massive amounts of data. Large compute farms bring the processing power needed to make that data useful.

Gaining practical insights from the Internet’s data flood is still in its infancy. Although some analysis tools such as MapReduce are well-established, even access to extremely large data sets is no guarantee that the results of the analysis will actually be useful. Even when the objective can be precisely defined in advance — say, improve movie recommendations — the best results often come from incrementally iterating and combining a variety of different approaches.

Big data is also leading to architectural changes in the way data is stored. NoSQL, a term which refers to a variety of caching and database technologies that complement (but don’t typically replace) traditional relational database technologies, is a hot topic because it suggests approaches to dealing with very high data volumes. (Essentially, NoSQL technologies relax one or more constraints in exchange for greater throughput or other advantage. For example, when you read data, what you get back may not be the latest thing that was written.) NoSQL is interesting because so much of big data is about reading and approximations — not absolute transactional integrity as with a stock purchase or sale transaction.

All this data is also physically stored differently. Just as high-value transactions are processed so as to minimize failure or mistakes, so too is the associated data stored on arrays of disks using high-end parts and connected using specialized networks. But these come at a high cost and, anyway, they’re not really designed to scale out to very large-scale distributed computing architectures. Thus, big data is increasingly about scale-out software-based storage that spreads out along with the servers processing the data. We are effectively circling back to a past when disks were all directly attached to computer systems — rather than sitting in centralized storage appliances. (Of course, the scale of both computing and storage is far, far greater than in those past times.)

Computing is always evolving, of course. However, what makes today particularly interesting is that we seem to be in the midst of convergent trends with enough momentum and maturity to reinforce each other in significant ways. That's what is happening with cloud computing, mobility, and big data.

[cnet]

Apr
23

Torvalds receives 2012 Millennium Technology Prize

Author admin    Category IT News     Tags

Linux creator Linus Torvalds has been chosen as one of two recipients of the Millennium Technology Prize.

Torvalds was named a 2012 laureate by the Technology Academy of Finland for his creation of and ongoing contributions to the open-source operating system. Created in 2004, the award is given every two years to recognize "technological innovation that significantly improves the quality of human life, today and in the future."

“[Linux] has become the basis of Android smartphones, tablets, digital television recorders and supercomputers the world over,” the academy said in announcing Torvalds’ selection. “Today millions of people are using devices with Linux at their core that make their work and social lives so much easier and more pleasurable.”

Along with Dr. Shinya Yamanaka of Japan, who is being recognized for his work in stem cell research, Torvalds will receive the award at a June 13 ceremony in Helsinki, Finland.

[cnet]

Apr
20

Fusion-io SDK gives developers native memory access, keys to the NAND realm

Author admin    Category IT News     Tags


Thought your SATA SSD chugged along real nice? Think again. Fusion-io has just released an SDK that will allow developers to bypass all the speed-draining bottlenecks that rob NAND memory of its true potential (i.e., the kernel block I/O layer) and tap directly into the memory itself. In fact, Fusion-io is so confident of its products' abilities that it prefers to call them ioMemory Application Accelerators, rather than SSDs. The SDK allows developers native access to the ioMemory, meaning applications can benefit from the kind of hardware integration you might get from a proprietary platform. The principle was already demonstrated earlier this year, when Fusion-io delivered one billion IOPS using this native access. The libraries and APIs are available now to registered members of its developer program; hit the more coverage link to sign up.

[engadget]

Apr
19

Yahoo to Shut Down 50 More Products, to Focus on Core Assets

Author admin    Category IT News, Yahoo     Tags

Yahoo's new boss Scott Thompson may only be a few months into his job, but he's already made an impact, and not necessarily a good one. Yet another reorganization is underway; thousands of people have been laid off, and more will be in the future.

Yahoo recently sued Facebook for patent infringement. And now the CEO talks about shutting down some 50 Yahoo properties, this after the previous CEO, Carol Bartz, had already shut down dozens of them.

Thompson gave no indication of which properties those were, but he did say that the future of Yahoo is focused on the properties that work, its core assets such as Mail, Finance, Sports and so on.

Mostly, he aims to cut the things that don't make money, and he warned that there may be a small dip in revenue as a result, but a much bigger increase in profits.

[softpedia]
