
Discussion forum reply needed


Part 1: 180 words, critical response to the following discussion forum topic. APA formatting with reference

Initial posting: What are the two core components of Hadoop?

There are three core components of Hadoop:

1. MapReduce – A software programming model for processing large sets of data in parallel.

2. HDFS – The Java-based distributed file system that can store all kinds of data without prior organization.

3. YARN – A resource management framework for scheduling and handling resource requests from distributed applications.

For computational processing, i.e., MapReduce: MapReduce is the data processing layer of Hadoop. It is a software framework for writing applications that process the vast amounts of structured and unstructured data stored in the Hadoop Distributed File System (HDFS). It processes huge amounts of data in parallel by dividing the submitted job into a set of independent tasks (sub-jobs).

In Hadoop, MapReduce breaks processing into two phases: Map and Reduce. Map is the first phase of processing, where we specify the complex logic, business rules, and costly code. Reduce is the second phase, where we specify lighter-weight processing such as aggregation and summation.
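
To illustrate that Map/Reduce split, here is a minimal word-count sketch using the standard org.apache.hadoop.mapreduce API; the class names and the whitespace-tokenizing logic are my own illustrative choices, not part of the original posting. The tokenizing work sits in the Mapper, and the light-weight summation sits in the Reducer.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (token.isEmpty()) continue;
        word.set(token);
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: light-weight aggregation, summing the counts per word.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }
}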

For storage, i.e., HDFS: HDFS is the acronym of Hadoop Distributed File System, whose basic purpose is storage. It follows a master-slave pattern. In HDFS the NameNode acts as the master, storing the metadata about the DataNodes, while the DataNodes act as slaves, storing the actual data blocks on their local disks in parallel.
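
As a small sketch of how a client talks to that master-slave layout through the standard org.apache.hadoop.fs API (the NameNode address and file path below are assumptions for illustration only):

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:8020");  // NameNode address (assumed)

    try (FileSystem fs = FileSystem.get(conf)) {
      Path path = new Path("/user/demo/hello.txt");

      // The client asks the NameNode (master) for metadata; the blocks
      // themselves are written to and read from the DataNodes (slaves).
      try (FSDataOutputStream out = fs.create(path, true)) {
        out.write("Hello HDFS".getBytes(StandardCharsets.UTF_8));
      }
      try (FSDataInputStream in = fs.open(path)) {
        byte[] buf = new byte[64];
        int n = in.read(buf);
        System.out.println(new String(buf, 0, n, StandardCharsets.UTF_8));
      }
    }
  }
}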

YARN: used for resource allocation. YARN is the processing framework in Hadoop that provides resource management, and it allows multiple data processing engines such as real-time streaming, data science, and batch processing to handle data stored on a single platform.
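
A minimal sketch of how those different engines show up to YARN, using the org.apache.hadoop.yarn.client.api.YarnClient API to ask the ResourceManager which applications it is currently managing (cluster settings are assumed to come from yarn-site.xml):

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();

    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();

    // Ask the ResourceManager for the applications it knows about; these can
    // be MapReduce jobs, streaming jobs, or other engines sharing the cluster.
    List<ApplicationReport> apps = yarnClient.getApplications();
    for (ApplicationReport app : apps) {
      System.out.println(app.getApplicationId() + " " + app.getApplicationType()
          + " " + app.getYarnApplicationState());
    }

    yarnClient.stop();
  }
}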

Part 2: 180 words, critical response to the following discussion forum topic. APA formatting with reference

What are the Hadoop ecosystems and what kinds of ecosystems exist?

 

The Hadoop ecosystem is a vast set of software bundles, categorized as belonging to a distributed filesystem ecosystem or a distributed programming ecosystem, that can interact with each other and with non-Hadoop software ecosystems as well (Roman, n.d.). I will not list all of the software bundles here, just enough to give you an idea of what types of software make up the Hadoop ecosystem.

Distributed Filesystems:

· Apache HDFS (Hadoop Distributed File System) stores large, complex files across clusters, often run with other programs such as Zookeeper, YARN, Weave, etc.

· Red Hat GlusterFS is described as a Red Hat Hadoop alternative for network servers.

· Quantcast File System (QFS) works with large-scale batch processing and MapReduce loads and is considered an alternative to Apache Hadoop HDFS. This DFS uses striping instead of full multiple replication to save storage capacity.

· Ceph File System works well with large amounts of object, block, or file storage, much like Hadoop.

· Lustre File System is a distributed file system for deployments that need high performance and availability over large networks through the SCSI protocol. Hadoop 2.5 supports Lustre.

Distributed Programming:

· Apache Ignite provides distributed computing over large-scale data for a wide variety of data types, including key-value, some SQL, map-reduce, etc.

· Apache MapReduce processes large data sets in parallel across distributed clusters, with YARN as the resource manager.

· Apache Pig executes data-processing jobs in parallel on Hadoop, using Hadoop HDFS and MapReduce. The main concern of Apache Pig is data flow, and it uses its own language called Pig Latin.

· JAQL supports JSON documents, XML, CSV data, and SQL data.

NoSQL Databases:

· Apache HBase is derived from Google Bigtable and is used as the database for Hadoop. It is column-oriented and works well with MapReduce (see the sketch after this list).

· Apache Cassandra is also derived from Google Bigtable and the Google File System and can run with or without HDFS. It also has some of the features of Amazon’s Dynamo.
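
As a brief sketch of HBase's column-oriented model mentioned above, here is a write and read through the standard HBase Java client API; the table name, column family, and values are illustrative assumptions, and the table is assumed to already exist:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("users"))) {

      // Write one cell: row key "row1", column family "info", qualifier "name".
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
      table.put(put);

      // Read the cell back by row key.
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
      System.out.println(Bytes.toString(value));
    }
  }
}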

SQL-on-Hadoop:

· Apache Hive provides an SQL-like language, though it is not SQL-92 compliant. It uses HiveQL for data summarization, querying, and analysis.
