
Google Updates Cloud Storage: Faster Uploads, Auto-Delete...

google


#1 +techbeck

    It's not that I am lazy, it's that I just don't care

  • 19,638 posts
  • Joined: 20-January 05

Posted 22 July 2013 - 16:42

Google today announced three new features for its Cloud Storage service that bring it closer to feature parity with Amazon Web Services (AWS). Just like AWS's S3, Google Cloud Storage now offers Object Lifecycle Management, which lets developers define when an object should be deleted, and lets them choose the region in which their files are stored to reduce latency between their storage and Compute Engine instances.
Google still considers both of these features to be experimental, so the usual Google Cloud Storage SLA doesn’t currently apply.
As Google notes, having your Durable Reduced Availability Cloud Storage buckets and Compute Engine instances in the same region means they will share the same “network fabric.” This should reduce latency and increase bandwidth for applications that are very data-intensive. Google offers developers a number of U.S.-based “regions” to choose from (East 1-3, Central 1 and 2, West 1).
However, Google says users can also simply specify whether they want their data hosted in the U.S. or the EU in general and have it spread across multiple regions. This may be a better fit if your application is more about content distribution than computation, the company says.
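For illustration, regional placement is chosen when a bucket is created. A minimal sketch using the gsutil command-line tool; the bucket names are hypothetical, and the exact region identifiers may differ from what Google exposes:

```shell
# Create a Durable Reduced Availability (DRA) bucket pinned to a
# specific U.S. region, so it can share a network fabric with nearby
# Compute Engine instances (bucket name and region are illustrative).
gsutil mb -c DRA -l US-EAST1 gs://example-dra-bucket

# Or keep the broader multi-region placement, better suited to
# content distribution than to computation:
gsutil mb -l US gs://example-bucket
```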
With Object Lifecycle Management, Google also now offers a feature that has long been available to AWS users: developers can set expiration rules for their files to decide when they should be automatically deleted. Just like AWS, Google uses a basic XML document to manage these rules, and the overall feature set also seems to mirror Amazon's service.
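As a sketch, such a lifecycle document might look like the following, which deletes objects 30 days after creation. This follows the bucket lifecycle format of Google's XML API as I understand it; the age threshold is arbitrary:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration>
  <Rule>
    <!-- Delete any object in the bucket once it is older than 30 days -->
    <Action><Delete/></Action>
    <Condition>
      <Age>30</Age>
    </Condition>
  </Rule>
</LifecycleConfiguration>
```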
Developers on Google's cloud platform will now also be able to upload their files faster thanks to gsutil 3.4, which uploads large objects in parallel. The update automatically uploads larger files over multiple connections to increase TCP throughput; it's enabled by default, so developers won't have to change their workflow. And if you have too much data and even parallel uploads are too slow, remember that you can always ship your hard drives to Google, too.
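For reference, this behavior can be tuned through gsutil's boto configuration file. A sketch, assuming the parallel-upload option names from gsutil's `.boto` config; the values shown are illustrative, not defaults:

```ini
# Excerpt from a ~/.boto configuration file. Files larger than the
# threshold are split into components, uploaded over parallel
# connections, and composed into a single object server-side.
[GSUtil]
parallel_composite_upload_threshold = 150M
parallel_composite_upload_component_size = 50M
```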
http://techcrunch.co...gional-buckets/