Monday 7 December 2020

APACHE SPARK INTERVIEW QUESTIONS AND ANSWERS 2020

 



Top Apache Spark Interview Questions and Answers

Apache Spark is an open-source, distributed, general-purpose cluster computing framework. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance. Apache Spark has its architectural foundation in the RDD, or Resilient Distributed Dataset, a read-only multiset of data items distributed over a set of machines and maintained in a fault-tolerant way. The DataFrame API was later introduced as an abstraction on top of the RDD, followed by the Dataset API.

In Apache Spark 1.x, the Resilient Distributed Dataset was the primary API. Although that changed in Spark 2.x, the RDD technology still underlies the Dataset API. There are many Apache Spark interview questions that candidates have to be prepared for, because answering them well is what gets the candidate the job. Listed below are some of the interview questions candidates can use to prepare.

 

Basic Apache Spark Interview Questions 

 

  1. What do you understand by Apache Spark?

Apache Spark is a cluster computing framework that runs on commodity hardware and performs data unification, that is, reading and writing data of many kinds from and to multiple sources. In Spark, a task is a unit of work that can be either a map task or a reduce task. The Spark context takes care of the execution of the job and provides APIs in several languages, namely Scala, Python, and Java, which are used to develop applications that run faster than their MapReduce counterparts.
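As a rough illustration of how these pieces fit together, here is a minimal word-count sketch in Scala; the application name, master URL, and input path are placeholders, not anything prescribed by the question.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
val sc = new SparkContext(conf)            // the Spark context drives job execution
val counts = sc.textFile("input.txt")      // hypothetical input file
  .flatMap(_.split("\\s+"))                // split each line into words
  .map(word => (word, 1))                  // pair every word with a count of 1
  .reduceByKey(_ + _)                      // sum the counts per word
counts.take(10).foreach(println)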

 

  2. How can you differentiate Spark and MapReduce? Which of the two is faster?

There is a clear difference between Spark and MapReduce. In MapReduce, the intermediate data is stored in HDFS, so it takes a lot of time to read it back from that source. Nothing of the sort happens in Spark: the intermediate data is kept in memory, so the user can access it much faster.

We can say that Spark is faster than MapReduce. There are specific reasons that justify this:

  • There is no tight coupling in Spark, so there is no mandatory rule that a reduce must come after a map.
  • Spark operates at a faster speed because it keeps data in memory as much as possible.

 

  3. What do you know about the architecture of Apache Spark? How do you run Apache Spark applications?

An Apache Spark application generally consists of two programs, the driver program and the workers program, whose functions differ from each other. A cluster manager sits between the two and interacts with the cluster nodes; it maintains the communication between the Spark context and the worker nodes. The Spark context leads, and the Spark workers follow it.

The workers contain executors that run the job. The Spark context can handle any dependencies or arguments that need to be passed, and the Resilient Distributed Datasets reside on the Spark executors. Users can also run Spark applications locally using threads, and if they want the benefits of a distributed environment they can use HDFS, S3, or other storage systems.
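As a hedged sketch of the local-versus-distributed choice, the snippet below runs the driver and executors as threads in one JVM and reads from two hypothetical locations; the master URL and both paths are placeholders.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("ArchitectureDemo")
  .master("local[4]")   // four worker threads in one JVM; use a cluster manager URL in production
  .getOrCreate()

val localData = spark.sparkContext.textFile("file:///tmp/data.txt")    // local file system
val hdfsData  = spark.sparkContext.textFile("hdfs:///data/input.txt")  // HDFS in a distributed setup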

 

  4. How can you define RDD?

RDD stands for Resilient Distributed Dataset. An RDD lets the user distribute data across all the nodes: if there is a huge amount of data and it is not essential to keep it in a single system, it can be spread across the nodes. A partition, or division, is a subset of the data that needs to be processed by a particular task. RDDs are very close in spirit to input splits in MapReduce.

 

  5. What do coalesce and repartition do in Spark?

What coalesce and repartition have in common is that both are used to change the number of partitions of a Resilient Distributed Dataset. The difference is that coalesce avoids a full shuffle: if the user goes from 1000 partitions down to 100, no shuffle takes place, and each of the 100 new partitions simply claims 10 of the existing partitions without moving data across the cluster.

Repartition, by contrast, performs a coalesce with a full shuffle and produces the requested number of partitions, with the data redistributed by a hash partitioner.
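A small sketch of the difference, assuming an existing SparkContext sc; the element count and partition numbers are illustrative.

val wide = sc.parallelize(1 to 1000000, 1000)   // RDD with 1000 partitions
val narrowed = wide.coalesce(100)               // merges partitions; avoids a full shuffle
val reshuffled = wide.repartition(100)          // full shuffle; data redistributed with a hash partitioner
println(narrowed.getNumPartitions)              // 100
println(reshuffled.getNumPartitions)            // 100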

 

  6. How can the number of partitions be specified while creating a Resilient Distributed Dataset? What is their function?

The user can specify the number of partitions while creating a Resilient Distributed Dataset either with sc.parallelize or with sc.textFile, as in the following:

val rdd = sc.parallelize(data, 4)

val data = sc.textFile("path", 4)

 

Intermediate Apache Spark Interview Questions 

 

  7. What are transformations and actions?

Transformations are used to create new Resilient Distributed Datasets from existing ones. Transformations do not run automatically: the user has to call an action for them to be executed, and if no action is called, the transformations are never run. This is easier to understand with an example.

For example: map(), filter(), flatMap(), etc.

Actions will return results from Resilient Distributed Dataset.

For example: reduce(), count(), collect(), etc.
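Putting the two together, here is a minimal sketch (the numbers are arbitrary, and an existing SparkContext sc is assumed) in which the transformations build up lazily and the final action triggers the computation.

val nums = sc.parallelize(1 to 10)
val squares = nums.map(n => n * n)        // transformation: nothing runs yet
val evens = squares.filter(_ % 2 == 0)    // transformation: still nothing runs
val total = evens.reduce(_ + _)           // action: triggers execution of the whole lineage
println(total)                            // prints 220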

 

  8. What do you understand by Lazy Evaluation?

Creating a Resilient Distributed Dataset from an existing one is known as a transformation, and the transformation is not executed until the user calls an action. This is because Apache Spark delays the computation until the result is actually required. In interactive use, for example, the user may type something wrong and have to correct it repeatedly; evaluating every line immediately would only add time and cause unnecessary delays. This behaviour is known as lazy evaluation. It also lets Spark optimize the full chain of required evaluations and make decisions that are not possible with line-by-line execution, and Spark can recover from failures and slow workers.

 

  9. Mention some Actions and Transformations

In this question, candidates are required to mention some actions and transformations to the interviewer.

Some Actions are: reduce(), count(), collect()

Some Transformations are: map(), filter(), flatMap().

 

Advanced Apache Spark Interview Questions 

 

  10. What roles do cache() and persist() play?

When the user wants to keep a Resilient Distributed Dataset in memory because it will be used many times, or because it was created after a lot of effort and complex processing, cache() or persist() can be called on the RDD to persist it. cache() is essentially persist() with its default behaviour; the only difference is that with cache() the data can be stored in memory only.

The first time an action is run on such an RDD, it is computed and then kept in memory on the nodes. With persist(), the user can specify whether the RDD should be stored on disk, in memory, or in both, and, for in-memory storage, whether it should be kept in serialized or deserialized format; all of these options can be defined by the user.
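A minimal sketch, assuming an existing SparkContext sc and a hypothetical log file; calling cache() instead would be equivalent to persist() with the default in-memory level.

import org.apache.spark.storage.StorageLevel

val errors = sc.textFile("logs.txt").filter(_.contains("ERROR"))   // hypothetical input
errors.persist(StorageLevel.MEMORY_AND_DISK_SER)   // keep serialized blocks in memory, spill to disk if needed
println(errors.count())    // first action computes the RDD and stores it
println(errors.first())    // reuses the persisted data instead of re-reading the file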

 

  11. How can you define Accumulators?

Accumulators are write-only variables that are initialized once and then sent to the workers. The workers update them according to the logic written in the tasks and send the updates back to the driver, which aggregates or processes them according to its own logic. Only the driver can access the value of an accumulator; for the tasks, accumulators are write-only. A typical use of accumulators is counting the number of errors seen across the workers while processing a Resilient Distributed Dataset.
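A sketch using the built-in long accumulator of the Spark 2.x API, assuming an existing SparkContext sc; the file path and record format are made up.

val badRecords = sc.longAccumulator("badRecords")
sc.textFile("records.csv").foreach { line =>
  if (line.split(",").length != 3) badRecords.add(1)   // tasks only write to the accumulator
}
println(badRecords.value)   // only the driver reads the aggregated value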

 

  12. How can Broadcast Variables be defined?

Broadcast variables are read-only shared variables. They are easier to understand with an example: suppose there is a piece of data that has to be used many times by the workers, at different stages. The significance of broadcast variables is that they let the driver share such data with the workers once, so that every machine can read it. Broadcast variables are very useful in such situations.
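A sketch of sharing a small lookup table with every executor, assuming an existing SparkContext sc; the map contents are made up.

val countryNames = Map("IN" -> "India", "US" -> "United States")
val broadcastNames = sc.broadcast(countryNames)   // shipped once to each executor

val resolved = sc.parallelize(Seq("IN", "US", "IN"))
  .map(code => broadcastNames.value.getOrElse(code, "Unknown"))   // tasks read the shared copy
resolved.collect().foreach(println)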

 

  13. What kind of optimizations can a developer make while working with Spark?

Apache Spark is memory intensive and does as much of the user's work as possible in memory. One thing the developer can tune is how long Spark waits at each level of data locality before giving up and falling back to the next one (process local -> node local -> rack local -> any).

The developer should also filter out unneeded data as early as possible, choose an appropriate storage level when caching, and tune the number of partitions in Spark.
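For example, the locality wait and default parallelism can be tuned through configuration; the values below are illustrative, not recommendations.

import org.apache.spark.SparkConf

val tunedConf = new SparkConf()
  .setAppName("LocalityTuning")
  .set("spark.locality.wait", "3s")             // how long to wait for a more local slot before falling back
  .set("spark.locality.wait.node", "3s")        // the wait can also be set per locality level
  .set("spark.default.parallelism", "200")      // tune the default number of partitions
// new SparkContext(tunedConf) would pick these settings up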

 

  14. How can you define Spark SQL?

Spark SQL is a Spark module for structured data processing. Unlike the basic Spark Resilient Distributed Dataset API, the interfaces provided by Spark SQL give Spark more information about the structure of the data and of the computation being performed, and Spark SQL uses this extra information to perform additional optimizations. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine. Spark SQL allows unmodified Hadoop Hive queries to run much faster on existing deployments and data.
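A minimal Spark SQL sketch; the JSON path, view name, and column names are hypothetical.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("SqlDemo").getOrCreate()
val people = spark.read.json("people.json")     // infers the schema from the JSON data
people.createOrReplaceTempView("people")
spark.sql("SELECT name, age FROM people WHERE age > 30").show()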

 

  15. What do you know about Data Frames?

A Data Frame is a two-dimensional labeled data structure that carries columns of different types, something like a SQL table or a spreadsheet. It can be thought of as a dictionary of Series objects, and it is used to store tables of data. A Data Frame behaves like a list of vectors of equal length arranged into a two-dimensional structure, which lets it share features of both a list and a matrix. A Data Frame is not the same as a Data Table, and the two also differ in their functions.

 

  16. How can you build a Data Frame?

It is not difficult to create a Data Frame. To create one from a dictionary of lists, the user has to make sure that all the lists are of the same length. If an index is passed, its length must equal the length of the lists; if no index is passed, the index automatically becomes range(n), where n is the length of the lists. This is one of the easiest methods of creating a Data Frame; there are many others, and the user can choose whichever is most convenient.
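In Spark specifically, one simple way to build a DataFrame is from a local collection; this sketch assumes an existing SparkSession named spark, and the column names and rows are made up.

import spark.implicits._

val df = Seq(("Alice", 30), ("Bob", 25)).toDF("name", "age")
df.show()
df.printSchema()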

 

  17. How can you connect Hive to Spark SQL?

It is quite easy to connect Hive to Spark SQL. The steps the user needs to follow are listed below, with a minimal connection sketch after the list:

  • The user has to copy hive-site.xml from $HIVE_HOME/conf/hive-site.xml to $SPARK_HOME/conf/hive-site.xml and add an entry for the Hive metastore URIs in the same file.
  • Then the user has to add all the dependencies required for the Spark components.
  • Then the user has to start all the Hadoop processes in the cluster and verify that they are running properly.
  • The user then has to start MySQL, because Hive needs it to connect to the metastore, and Spark SQL will also need it once it is connected to Hive.
  • Lastly, the user has to run the Hive metastore process so that Spark SQL can connect to the metastore URIs when it runs; Spark SQL will then pick up the settings from the hive-site.xml file.
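A minimal connection sketch, assuming hive-site.xml is already in $SPARK_HOME/conf and the metastore service is running.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("HiveDemo")
  .enableHiveSupport()     // lets Spark SQL use the Hive metastore
  .getOrCreate()

spark.sql("SHOW TABLES").show()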

 

  18. How can you define GraphX?

Users often face the problem of processing data in the form of graphs when analysing data, and this is where GraphX matters. GraphX performs graph computation in Spark on data held in files or in Resilient Distributed Datasets. It is built on top of Spark Core, which gives it Spark's capabilities such as fault tolerance and scaling, and it also ships with many built-in graph algorithms.

GraphX also unifies ETL, exploratory analysis, and iterative graph computation in a single system. Users can view the same data as both graphs and collections, transform and join graphs with Resilient Distributed Datasets efficiently, and write custom iterative algorithms using the Pregel API. GraphX competes with the performance of the fastest specialized graph systems while retaining Spark's flexibility, ease of use, and fault tolerance.
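A small sketch of building a property graph in GraphX, assuming an existing SparkContext sc; the vertices and edges are made up.

import org.apache.spark.graphx.{Edge, Graph}

val users = sc.parallelize(Seq((1L, "Alice"), (2L, "Bob"), (3L, "Carol")))
val follows = sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(3L, 1L, "follows")))
val graph = Graph(users, follows)
println(graph.numVertices + " vertices, " + graph.numEdges + " edges")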

 

  19. What do you understand by the PageRank algorithm?

PageRank is one of the algorithms in GraphX. It measures the significance of each vertex in a graph by assuming that an edge from u to v represents an endorsement of v's importance by u. This is easier to see with an example: if a Twitter user is followed by many other users, that user is given a high rank. The same idea applies in GraphX, which ships with both static and dynamic implementations of PageRank as methods on the PageRank object.
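Continuing the hypothetical graph and users RDD from the previous sketch, the built-in PageRank can be run until it converges to a given tolerance.

val ranks = graph.pageRank(0.0001).vertices   // RDD of (vertexId, rank)
ranks.join(users)                             // attach the user names from the earlier sketch
  .map { case (_, (rank, name)) => (name, rank) }
  .collect()
  .foreach(println)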

 

  20. What do you know about Spark Streaming?

Spark Streaming is the API for stream processing of live data. It is used whenever data flows in continuously and has to be processed as quickly as possible. More formally, Spark Streaming is an extension of the core Spark API that enables scalable, fault-tolerant stream processing of live data streams. It provides a high-level abstraction known as a discretized stream, or DStream.

A DStream represents a continuous stream of data. The data can come from Flume, Kafka, Kinesis, TCP sockets, and many other sources, and the user can apply complex processing to it before it is pushed to its destinations, which can be databases, file systems, or dashboards.
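A classic DStream sketch, assuming an existing SparkContext sc and a text stream on a local socket; the host, port, and batch interval are placeholders.

import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(10))       // 10-second batch interval
val lines = ssc.socketTextStream("localhost", 9999)   // hypothetical source
lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
ssc.start()
ssc.awaitTermination()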

 

  21. What is a Sliding Window?

It is essential to specify the batch interval in Spark Streaming. For instance, if the batch interval is 10 seconds, Spark processes whatever data it receives within those 10 seconds, and those 10 seconds are the last batch interval. With a sliding window, the user can specify how many of the last batches are to be processed together, so the window specification includes the batch interval along with the number of batches in the window.

Apart from that, the user can also specify how often the last sliding window is processed. For instance, the window can cover the last 3 batches and slide forward every 2 new batches. The user therefore chooses both when the window slides and how many batches are processed in each window.
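Continuing the streaming sketch above, a hedged example of a 3-batch window that slides every 2 batches (a 30-second window sliding every 20 seconds, given the 10-second batch interval); both durations must be multiples of the batch interval.

import org.apache.spark.streaming.Seconds

val windowedCounts = lines.flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(20))
windowedCounts.print()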

 

  22. Do you have any questions for us?

This is one of the most important and trickiest questions candidates face in an interview, because it can shape the interviewer's impression of them and, with it, their future at the company. Candidates should prepare for it before the interview. Replying that all their doubts were cleared, or that they have no questions, creates a bad impression, because the question is meant to explore the candidate's desire to learn more.

Candidates are advised to prepare a set of questions to ask the interviewers, ideally after gathering some background on the organization and shaping the questions around it. This also deepens their knowledge of the company, and it is important to know about a workplace before starting to work there.

