{"id":67033,"date":"2018-08-08T07:06:09","date_gmt":"2018-08-08T07:06:09","guid":{"rendered":"https:\/\/www.whizlabs.com\/blog\/?p=67033"},"modified":"2018-08-08T07:06:09","modified_gmt":"2018-08-08T07:06:09","slug":"hdfs-interview-questions","status":"publish","type":"post","link":"https:\/\/www.whizlabs.com\/blog\/hdfs-interview-questions\/","title":{"rendered":"HDFS Interview Questions and Answers"},"content":{"rendered":"<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\">A Hadoop interview examines a candidate from many angles of the big data domain. When you appear for a Hadoop interview, be prepared to face questions on all of its ecosystem components, and HDFS is no exception. As HDFS is one of the key components of Hadoop, HDFS interview questions form an important part of any Hadoop interview.<\/span><\/p>\n<blockquote>\n<p style=\"text-align: justify;\">Preparing to become a certified Hadoop professional? Start your preparation and get hands-on with our online courses for\u00a0<a href=\"https:\/\/www.whizlabs.com\/hdpca-certification\/\" target=\"_blank\" rel=\"noopener\">Hortonworks Certification<\/a> and <a href=\"https:\/\/www.whizlabs.com\/cloudera-cca-admin-certification\/\" target=\"_blank\" rel=\"noopener\">Cloudera Certification<\/a>.<\/p>\n<\/blockquote>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\">In this blog, we will discuss some of the most important and <\/span><span class=\"s2\">frequently asked HDFS Interview Questions and Answers. Moreover, these <a href=\"https:\/\/www.whizlabs.com\/blog\/top-50-hadoop-interview-questions\/\" target=\"_blank\" rel=\"noopener\">Hadoop Interview Questions<\/a> on HDFS will highlight the core areas of HDFS to focus on.<\/span><\/p>\n<h2 class=\"p2\" style=\"text-align: justify;\">Most Common HDFS Interview Questions and Answers<\/h2>\n<p class=\"p1\" style=\"text-align: justify;\"><strong><span class=\"s1\">1. 
What is HDFS?<\/span><\/strong><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><strong>Answer:<\/strong> HDFS stands for Hadoop Distributed File System, which stores large datasets in Hadoop. It runs on commodity hardware and is highly fault tolerant. HDFS follows a Master\/Slave architecture in which a number of machines run as a cluster. The cluster comprises a NameNode and multiple slave nodes known as DataNodes.<\/span><\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-67034 size-full\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/sites\/2\/2018\/08\/hdfs-architecture.jpg\" alt=\"HDFS Architecture\" width=\"934\" height=\"665\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-architecture.jpg 934w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-architecture-300x214.jpg 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-architecture-768x547.jpg 768w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-architecture-590x420.jpg 590w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-architecture-640x456.jpg 640w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-architecture-681x485.jpg 681w\" sizes=\"(max-width: 934px) 100vw, 934px\" \/><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\">The NameNode stores metadata, i.e., the number of data blocks, their replicas, locations, and other details. On the other hand, a DataNode stores the actual data and serves read\/write operations as per the client\u2019s request.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><strong><span class=\"s1\">2. 
What are the different components of HDFS?<\/span><\/strong><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><strong>Answer:<\/strong> HDFS has three components:<\/span><\/p>\n<ul class=\"ul1\" style=\"text-align: justify;\">\n<li class=\"li1\"><span class=\"s1\">NameNode<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">DataNode<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">Secondary NameNode<\/span><\/li>\n<\/ul>\n<p><strong>3. What is the default block size of a data block in HDFS?<\/strong><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><strong>Answer:<\/strong> The default block size in Hadoop 1.x is 64MB, and in Hadoop 2.x it is 128MB.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><strong>4. Explain the service of the NameNode in Hadoop.<\/strong><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><strong>Answer:<\/strong> The NameNode plays the role of the master node in HDFS. <\/span><span class=\"s6\">It holds two vital pieces of information:<\/span><\/p>\n<ul class=\"ul1\" style=\"text-align: justify;\">\n<li class=\"li8\"><span class=\"s2\">The Hadoop metadata and the file system tree <\/span><\/li>\n<li class=\"li8\"><span class=\"s2\">The in-memory mapping between data blocks and data nodes<\/span><\/li>\n<\/ul>\n<p class=\"p8\" style=\"text-align: justify;\"><span class=\"s2\">The NameNode holds metadata such as file permissions, the replication factor, block size, <\/span><span class=\"s1\">file <\/span><span class=\"s2\">creation time, file ownership, and the mapping between the blocks of a file and the data nodes.<\/span><\/p>\n<p class=\"p8\" style=\"text-align: justify;\"><strong>5<\/strong>. 
<strong>What are fsimage<\/strong><strong>\u00a0and editlogs in HDFS?<\/strong><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer: <\/b>The metadata of the Hadoop file system is stored in a file known as fsimage, which the NameNode keeps on disk and loads into memory at startup.<\/span><\/p>\n<p class=\"p9\" style=\"text-align: justify;\"><span class=\"s1\">When any change is made to the Hadoop filesystem, such as adding or removing a file, it is not immediately written to the fsimage; instead, it is recorded in a separate file on disk called the editlog. When the NameNode starts, the editlog is merged with the old fsimage file, and an updated fsimage is produced.<\/span><\/p>\n<p class=\"p9\" style=\"text-align: justify;\"><b>6. The default block size in Unix and Linux is 4KB, so why is the HDFS block size set to 64MB or 128MB?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer: <\/b>A block is the smallest unit of data that is stored in a file system. If Hadoop used the default Linux\/Unix block size, a massive dataset (petabytes) would require an enormous number of blocks. Consequently, the amount of metadata would increase significantly, causing performance issues on the NameNode. So, in Hadoop 1.x the default block size is 64MB, and in Hadoop 2.x it is set to 128MB.<\/span><\/p>\n<blockquote>\n<p style=\"text-align: justify;\">Are you a fresher aspiring to make a career in Hadoop? Read our previous blog that will help you to start <a href=\"https:\/\/www.whizlabs.com\/blog\/learning-hadoop-for-beginners\/\" target=\"_blank\" rel=\"noopener\">learning Hadoop for beginners<\/a>.<\/p>\n<\/blockquote>\n<p class=\"p1\" style=\"text-align: justify;\"><b>7. 
What happens when the NameNode starts?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer:<\/b> When the NameNode starts, it performs the following operations:<\/span><\/p>\n<ul class=\"ul1\" style=\"text-align: justify;\">\n<li class=\"li1\"><span class=\"s1\">It loads the file system namespace into its main memory from the last saved fsimage and the editlog file.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">It creates a new fsimage file by merging the previous fsimage and editlog files, producing the new file system namespace.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">It receives information about block locations from all DataNodes. <\/span><\/li>\n<\/ul>\n<p><span class=\"s1\"><b>8. What is Safe mode in Hadoop?<\/b><\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer:<\/b> Safe mode indicates the maintenance state of the NameNode. During safe mode, the HDFS cluster is read-only. Hence, no modification is allowed in the filesystem, and no data block can be deleted or replicated in this mode.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><b>9. If you change the block size in HDFS, what happens to the existing data?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer: <\/b>Changing the block size in HDFS does not affect existing data; only files written after the change use the new block size.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><b>10. What is HDFS replication? What is the default replication factor?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer: <\/b>HDFS is designed to be fault tolerant to prevent data loss. 
Hence, HDFS maintains three copies (by default) of each data block on different DataNodes across different racks, which is known as replication.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\">The default replication factor is 3.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><b>11. What is the Secondary NameNode?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer:<\/b><\/span> <span class=\"s1\">Hadoop metadata is stored in the NameNode main memory and on disk. Two files are mainly used for this purpose \u2013<\/span><\/p>\n<ul class=\"ul1\" style=\"text-align: justify;\">\n<li class=\"li1\"><span class=\"s1\">Editlogs<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">Fsimage<\/span><\/li>\n<\/ul>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\">Any update made to HDFS is entered in the editlogs. As the number of entries increases, this file grows automatically, whereas the size of the fsimage file remains the same. When the server is restarted, the contents of the editlogs file are merged into the fsimage file, which is then loaded into main memory; this merge is time-consuming. The larger the editlogs file, the longer it takes to merge into the fsimage, causing extended downtime.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\">To avoid such prolonged downtime, a helper node for the NameNode, known as the Secondary NameNode, is used. It periodically merges the contents of the editlogs into the fsimage and copies the new fsimage file back to the NameNode.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><b>12. 
How does the NameNode handle DataNode failure?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer<\/b><\/span><span class=\"s10\"><b>: <\/b><\/span><span class=\"s1\">The HDFS architecture is designed so that every DataNode periodically sends a heartbeat to the NameNode to confirm that it is working. When the NameNode does not receive a heartbeat from a particular DataNode, it considers that DataNode dead or non-functional and re-replicates the blocks that DataNode held, from the remaining replicas, to other active DataNodes.<\/span><\/p>\n<blockquote>\n<p style=\"text-align: justify;\">Preparing for a Hadoop Developer interview? Understand the <a href=\"https:\/\/www.whizlabs.com\/blog\/hadoop-developer-job-responsibilities\/\" target=\"_blank\" rel=\"noopener\">Hadoop Developer Job Responsibilities<\/a> first.<\/p>\n<\/blockquote>\n<h2 class=\"p2\">Advanced HDFS Interview Questions<\/h2>\n<p style=\"text-align: justify;\">So, moving forward, here we cover a few advanced HDFS interview questions along with the frequently asked ones.<\/p>\n<p class=\"p4\" style=\"text-align: justify;\"><b>13. How is a data\/file read operation performed in HDFS?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer<\/b><\/span><span class=\"s10\"><b>: <\/b><\/span><span class=\"s1\">The HDFS NameNode holds all the file information and the actual locations of the blocks on the slave nodes. 
The following steps are performed in the read operation of a file:<\/span><\/p>\n<ul class=\"ul1\" style=\"text-align: justify;\">\n<li class=\"li1\"><span class=\"s1\">When a file needs to be read, the file information is retrieved from the NameNode by a DistributedFileSystem instance.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">The NameNode checks whether the file exists and whether the user has access to it. <\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">Once these criteria are met, the NameNode provides a token to the client for authentication when fetching the file from the DataNodes. <\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">The NameNode provides the list of all blocks of the file and the related data nodes.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">The DataNodes are then sorted as per their proximity to the client.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">DistributedFileSystem returns an input stream to the client called FSDataInputStream so that the client can read data from it. <\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">FSDataInputStream works as a wrapper around DFSInputStream, which is responsible for managing the NameNode and DataNode I\/O.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">As the client calls read() on the stream, DFSInputStream connects to the closest DataNode holding the first block, and data is returned to the client via the stream. 
read() is called repeatedly until the end of the first block is reached.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">Once the first block is completely read, the connection with that DataNode is closed.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">Next, DFSInputStream connects to the best available DataNode for the next block, and this continues until the file is completely read.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">Once the entire file is read, FSDataInputStream calls close() to close the connection.<\/span><\/li>\n<\/ul>\n<p><span class=\"s1\"><b>14. Is a concurrent write into an HDFS file possible?<\/b><\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer<\/b><\/span><span class=\"s10\"><b>: <\/b><\/span><span class=\"s1\">No, HDFS does not allow concurrent writes. When one client receives permission from the NameNode to write to a block, that block is locked until the write operation finishes, so no other client can write to the same block.<\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><b>15. What are the challenges in the existing HDFS architecture?<\/b><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer<\/b><\/span><span class=\"s10\"><b>: <\/b><\/span><span class=\"s1\">The existing HDFS architecture consists of only one NameNode, which contains a single Namespace, and multiple DataNodes that hold the actual data. This architecture works well for limited cluster sizes. However, if we try to increase the cluster size, we come across a few challenges. <\/span><\/p>\n<ul class=\"ul1\" style=\"text-align: justify;\">\n<li class=\"li1\"><span class=\"s1\">As the Namespace and blocks are tightly coupled, other services cannot utilize the storage capacity of the blocks efficiently. 
<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">With a single NameNode, adding more DataNodes to the cluster creates a huge amount of metadata. DataNodes can be scaled horizontally, but the NameNode cannot be scaled in the same manner. This is the Namespace scalability issue.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">The current HDFS file system has a performance limitation in terms of throughput, because a single NameNode supports only about 60,000 concurrent tasks.<\/span><\/li>\n<li class=\"li1\"><span class=\"s1\">We cannot get an isolated Namespace for a single application, as HDFS deployments happen in multi-tenant environments where multiple applications or organizations share a single cluster.<\/span><\/li>\n<\/ul>\n<p><span class=\"s1\"><b>16. What is HDFS Federation?<\/b><\/span><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\"><b>Answer<\/b><\/span><span class=\"s10\"><b>: <\/b><\/span><span class=\"s1\">In the existing HDFS architecture, horizontal scaling of the NameNode is not possible. 
HDFS Federation is the architecture through which several independent NameNodes can be added, scaling the namespace horizontally without any coordination between the NameNodes.<\/span><\/p>\n<p><img decoding=\"async\" class=\"alignnone wp-image-67035 size-full\" src=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/sites\/2\/2018\/08\/hdfs-federation.jpg\" alt=\"HDFS Federation\" width=\"913\" height=\"678\" srcset=\"https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation.jpg 913w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-300x223.jpg 300w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-768x570.jpg 768w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-566x420.jpg 566w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-80x60.jpg 80w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-100x75.jpg 100w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-180x135.jpg 180w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-238x178.jpg 238w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-640x475.jpg 640w, https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs-federation-681x506.jpg 681w\" sizes=\"(max-width: 913px) 100vw, 913px\" \/><\/p>\n<p class=\"p1\" style=\"text-align: justify;\"><span class=\"s1\">In the HDFS Federation architecture, the DataNodes are present at the bottom layer and work as common storage. Each DataNode registers itself with all the NameNodes in the cluster. Many NameNodes manage many Namespaces, and each Namespace has its own Block pool. 
A Block pool is a set of blocks that belongs to a single Namespace.<\/span><\/p>\n<p class=\"p9\" style=\"text-align: justify;\"><span class=\"s11\"><b><i>Bottom line, <\/i><\/b>we<\/span><span class=\"s2\"> hope the above-mentioned HDFS interview questions will help you prepare for the Hadoop interview. HDFS is one of the most important components of Hadoop, and you should gain a complete understanding of its architecture and configuration to explore it better. Hence, we highly recommend building up your knowledge base with industry-recognized Hadoop certification courses such as those from Cloudera or Hortonworks. <\/span><\/p>\n<p class=\"p2\" style=\"text-align: justify;\"><span class=\"s1\"><i>Whizlabs offers two Big Data Hadoop certification courses that are highly recognized in the industry and provide a thorough understanding of Hadoop through theory and hands-on practice. These are \u2013 <\/i><\/span><\/p>\n<p class=\"p13\" style=\"text-align: justify;\"><span class=\"s12\"><a href=\"https:\/\/www.whizlabs.com\/hdpca-certification\/\" target=\"_blank\" rel=\"noopener\"><i>HDP Certified Administrator (HDPCA) Certification<\/i><\/a><\/span><\/p>\n<p class=\"p13\" style=\"text-align: justify;\"><span class=\"s12\"><a href=\"https:\/\/www.whizlabs.com\/cloudera-cca-admin-certification\/\" target=\"_blank\" rel=\"noopener\"><i>Cloudera Certified Associate Administrator (CCA-131) Certification<\/i><\/a><\/span><\/p>\n<p class=\"p14\" style=\"text-align: justify;\"><span class=\"s1\"><i>Join us today and achieve success tomorrow as a Hadoop professional!<\/i><\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>A Hadoop interview examines a candidate from different angles from the big data perspective. When you appear for a Hadoop interview, be prepared to face questions on all of its ecosystem components and HDFS is no exception. 
As HDFS is one of the key components of Hadoop, hence, HDFS interview questions take an important part of any Hadoop interview. Preparing to become a certified Hadoop professional? Start preparation and get hands-on with our online courses for\u00a0Hortonworks Certification and Cloudera Certification. In this blog, we will discuss on some of the important and top HDFS Interview Questions and Answers. Moreover, these [&hellip;]<\/p>\n","protected":false},"author":220,"featured_media":67072,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_uag_custom_page_level_css":"","site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[6],"tags":[405,850,1555],"class_list":["post-67033","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-big-data","tag-best-hadoop-hdfs-interview-questions","tag-hadoop-interview-questions-on-hdfs","tag-top-hdfs-interview-questions-and-answers"],"uagb_featured_image_src":{"full":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",600,315,false],"thumbnail":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions-150x150.png",150,150,true],"medium":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions-300x158.png",300,158,true],"medium_large":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",600,315,false],"large":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",600,315,false],"1536x1536":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",600,315,false],"2048x2048":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",600,315,false],"profile_24":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",24,13,false],"profile_48":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",48,25,false],"profile_96":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",96,50,false],"profile_150":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",150,79,false],"profile_300":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_q
uestions.png",300,158,false],"tptn_thumbnail":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions-250x250.png",250,250,true],"web-stories-poster-portrait":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",600,315,false],"web-stories-publisher-logo":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",96,50,false],"web-stories-thumbnail":["https:\/\/www.whizlabs.com\/blog\/wp-content\/uploads\/2018\/08\/hdfs_interview_questions.png",150,79,false]},"uagb_author_info":{"display_name":"Aditi Malhotra","author_link":"https:\/\/www.whizlabs.com\/blog\/author\/aditi\/"},"uagb_comment_info":6,"uagb_excerpt":"A Hadoop interview examines a candidate from different angles from the big data perspective. When you appear for a Hadoop interview, be prepared to face questions on all of its ecosystem components and HDFS is no exception. As HDFS is one of the key components of Hadoop, hence, HDFS interview questions take an important 
part&hellip;","_links":{"self":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/67033","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/users\/220"}],"replies":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/comments?post=67033"}],"version-history":[{"count":0,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/posts\/67033\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/media\/67072"}],"wp:attachment":[{"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/media?parent=67033"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/categories?post=67033"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.whizlabs.com\/blog\/wp-json\/wp\/v2\/tags?post=67033"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}