Amazon launches Elastic File System, Machine Learning offerings

Amazon today made a number of key AWS announcements at its AWS Summit in San Francisco. These include the launch of two new AWS services.

Amazon Elastic File System

Arguably the biggest new service Amazon launched today is Amazon Elastic File System (EFS), a managed shared file storage service for EC2 instances. Designed for a wide range of workloads, EFS can grow to petabyte scale, making it well suited to big data applications, content repositories and home directories.

EFS supports Network File System version 4 (NFSv4) and is mounted to EC2 instances like a standard file system. Unlike Amazon Elastic Block Storage (EBS), EFS file systems can be mounted to multiple EC2 instances at the same time, even if they are located in different Availability Zones within the same region. In fact, in order to ensure durability and availability, EFS data is automatically stored across multiple Availability Zones.
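Because EFS speaks standard NFSv4, attaching it looks like an ordinary NFS mount from the instance's point of view. The sketch below is a hypothetical illustration; the file system ID, region and DNS naming scheme are placeholders, not values from the announcement.

```shell
# Hypothetical example: mounting an EFS file system over NFSv4 from an
# EC2 instance. The DNS name below is a placeholder.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 fs-12345678.efs.us-west-2.amazonaws.com:/ /mnt/efs

# Repeating the same mount on other instances -- even in other
# Availability Zones in the region -- shares the file system among them,
# which is what distinguishes EFS from single-instance EBS volumes.
```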

EFS file systems can be scaled up or down seamlessly, and EFS users do not need to provision a set amount of space when they create the file system. Customers pay 30 cents per GB per month for the storage actually used. EFS uses SSD disks, and throughput and IOPS scale as the size of the file system increases. Amazon says that EFS can support thousands of concurrent NFS connections and is appropriate for workloads that require low latencies.
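Since there is no pre-provisioned capacity, billing follows actual usage. A minimal sketch of that pay-per-use model, using the announced 30-cents-per-GB rate (the usage figure is an arbitrary example, not from the announcement):

```python
# Illustration of EFS pay-per-use storage pricing: customers are billed
# only for the gigabytes actually stored, at the announced rate.
EFS_RATE_PER_GB_MONTH = 0.30  # 30 cents per GB per month

def monthly_storage_cost(gb_used: float) -> float:
    """Monthly cost for the storage actually consumed; nothing is
    pre-provisioned, so an empty file system costs nothing."""
    return round(gb_used * EFS_RATE_PER_GB_MONTH, 2)

print(monthly_storage_cost(250))  # 250 GB stored -> 75.0 (dollars/month)
print(monthly_storage_cost(0))    # no data stored -> 0.0
```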

For security, Amazon EFS is integrated with AWS Identity and Access Management and Amazon VPC security groups. EFS file systems can be administered using an API, the AWS Management Console or the AWS CLI.
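As a rough sketch of what CLI administration might look like, the commands below follow the shape of the EFS API; exact command names and availability in the preview are assumptions, and configured AWS credentials are required.

```shell
# Hypothetical AWS CLI sketch for administering EFS file systems.
# Create a file system (the creation token is an arbitrary idempotency key).
aws efs create-file-system --creation-token my-efs-demo

# List existing file systems in the account and region.
aws efs describe-file-systems
```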

EFS will first be available in preview in the Oregon AWS region and customers can request access to the preview.

Amazon Machine Learning

A number of AWS services, including Redshift, S3 and EC2, are widely used by AWS customers to store and analyze data. But a growing number of customers are also using machine learning to make predictions about the future. For instance, Netflix uses machine learning to make movie recommendations to its users.

Organizations like Netflix have the technical resources and expertise necessary to put machine learning to use, but there are often significant challenges. Machine learning requires knowledge of statistics, models and algorithms, as well as data validation and transformation. And employing machine learning at scale in production can be difficult.

In an effort to put the power of machine learning into the hands of all AWS customers, Amazon today announced Amazon Machine Learning, a service that lets users pull in data from Amazon data stores such as S3 and RDS; create, visualize and refine models using online wizards; and deploy those models into production quickly.

Based on the same technology its internal data scientists have been using for years, Amazon says that its Machine Learning solution is capable of producing billions of predictions a day, and makes it easy for users to obtain predictions from their models using a real-time predictions API or a batch predictions API. To help users get started, Amazon Machine Learning bundles common machine learning algorithms and data transformations out-of-the-box.

Users pay for the compute hours required to analyze data and build models, as well as for the predictions they produce. Batch predictions cost $0.10 per 1,000, and real-time predictions cost $0.0001 each.
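The two prediction modes work out to the same per-prediction rate, which the following sketch makes explicit (the prediction volumes are arbitrary examples; compute-hour charges for model building are excluded):

```python
# Cost sketch based on the announced prediction pricing:
# $0.10 per 1,000 batch predictions, $0.0001 per real-time prediction.
BATCH_RATE_PER_1000 = 0.10
REALTIME_RATE_EACH = 0.0001

def batch_cost(num_predictions: int) -> float:
    """Cost of a batch prediction job, billed per 1,000 predictions."""
    return round(num_predictions / 1000 * BATCH_RATE_PER_1000, 2)

def realtime_cost(num_predictions: int) -> float:
    """Cost of real-time API calls, billed per prediction."""
    return round(num_predictions * REALTIME_RATE_EACH, 2)

# One million predictions cost $100 either way -- the modes differ in
# latency and delivery, not in the effective per-prediction rate.
print(batch_cost(1_000_000))     # 100.0
print(realtime_cost(1_000_000))  # 100.0
```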
