Monday, September 28, 2015

Agile Release Process: Timelines and Milestones

Project Review

Release Branching

Hard Lock
Release Freeze

Module Production 1

Module Production 2

Module Production 3

Support Engineering Roles and Responsibilities (if you're looking for a job in this area)

This is XXX. I have around 10 years of experience in Java and J2EE technologies. I started my career at XX as a Java engineer on a B2B application and worked there for about 3 years. Then I moved to XXX as an application support engineer for one of the main clients, "XXX", where I worked for about 6 months. After that I moved to XXX as an application designer and worked for about 3 years, then moved to XXX as an application support developer for about 4 years. I then joined XXX to run the Release and CD process following the agile methodology, coordinating all the developers and continuous delivery to handle any deployment issues; if there is a showstopper blocker or critical piece, we get the engineer to fix it.

What sort of support were you involved in at XXX and XXX as a support engineer?

At XXX we have a business-to-consumer application where each customer can book hotels affiliated with XXX around the globe. If a customer has any issue regarding loyalty points and bonuses, they create a ticket and assign it to us to fix; this corresponds to L2 (Level 2 support). We may need to apply DB fixes for a customer's unique ID, and so on.

At XXX I was more involved in leading a team which supported the entire offshore operation.

XXX owns a B2B enterprise application with 256 active products along with several options (1, 2, 3 and 4). Around 8,000 users hit this application and place orders. The application has 20 systems communicating internally while orders are placed; users can hit issues and get stuck in the flow, and when that happens the end user calls the SSRC team, who create the tickets.

L1 support, SSRC (Sales Support Resource Center): they take the calls from end users and raise the tickets.

L2 support: receive all the tickets raised against the name of our application, i.e. the XXX application; create a rule in the SSRC tool so notifications go out to all the teams, and assign each ticket to a team, sending a list of tickets with each ticket's owner assigned based on functionality.

We have product experts; I am an expert on Product 1, Product 2, and every major product. If an issue exists, I need to provide the resolution and close it.

If any input is required from the customer, we route it through the SSRC L1 team to make sure they get the information from the customer needed to fix the issue.

L3 support: for any repeated issues or code issues, we need to create a defect and assign it to the Dev team.

L4 support: any functional changes or business-required changes need to be assigned to the core development teams.

My roles:

Assign all tickets to the teams based on product and functionality.

Send notification mails to the teams about work items.

Fixed many issues related to customer SLAs.

Creating a customer checklist to go through before calling the SSRC team.

Distribute the checklist to L1 support; if customer training issues come up, prepare a list and ask them to follow up on those items.

Set up meetings with customers and explain the basic functionality issues that affect them.

Frequently apply DB fixes.

Create product-based customer checklists shared across the teams.

Conducting EOM (end of month) reviews directly with end users to close all the issues on time.

Send all the issue reports at EOM and present them to upper management.

Sending weekly fix reports: who worked on what and how many tickets they closed; if any support engineer gives a wrong resolution, we need to correct and update the resolution.

Apache Storm and Kafka in Real-Time Scenarios... Hadoop Eco System

First, we need to know all the terminology here:

Topology -- a graph wiring spouts and bolts together
Spout -- the source of a stream; it emits tuples (e.g. reading from a queue)
Bolt -- a processing unit that consumes and emits tuples, roughly like a map/reduce task
Nimbus -- the master daemon, similar in role to the NameNode/JobTracker
Zookeeper -- the mediator that coordinates configuration between Nimbus and the workers
Redis -- a key-value store


To try practical programs you need to know all the related jars and set up the required environment.
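Real Storm code needs the storm-core jars on the classpath. As a language-neutral way to see the data flow first, here is a minimal pure-Python sketch (not real Storm code; the class and method names only mirror Storm's concepts) of how a spout emits tuples that a bolt consumes and counts:

```python
from collections import Counter

class SentenceSpout:
    """Stream source: hands out one tuple per call, like a spout's nextTuple()."""
    def __init__(self, sentences):
        self.pending = list(sentences)

    def next_tuple(self):
        return self.pending.pop(0) if self.pending else None

class WordCountBolt:
    """Processing unit: consumes tuples and keeps running counts, like a bolt's execute()."""
    def __init__(self):
        self.counts = Counter()

    def execute(self, sentence):
        for word in sentence.split():
            self.counts[word] += 1

# The "topology" is just the wiring from spout to bolt.
spout = SentenceSpout(["to be or not to be"])
bolt = WordCountBolt()
while (tup := spout.next_tuple()) is not None:
    bolt.execute(tup)

print(dict(bolt.counts))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In real Storm the spout would read from Kafka and the counts would go to Redis or HBase, but the spout-to-bolt flow is the same shape.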

in..progress...

Agile process in Hadoop real world

Hello there,

Agile process in real time:

Monthly 2 releases if you have multiple applications; it depends on the domain and the organization.

Example:

module 1

module 2

module 1 has 10 applications

module 2 has 20 applications

module 1 is, say, legacy

module 2 is the latest, actively upgraded one

In one month we need to release both module 1 and module 2.

requirements

analysis

task request specification for changes

development

lean labs for faster delivery

QA testing

regression, black box, manual, and so on

performance testing; catching defects, following up with the dev teams, and getting them fixed

daily triage (bug-fixing) meetings

some defects are deferred to a future release

every single piece of functionality has a property, so if anything goes wrong they can turn it off on the fly
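That "property you can turn off on the fly" is what is usually called a feature flag. A minimal sketch of the idea (the flag name and functions here are illustrative, not from any specific library):

```python
# Feature-flag sketch: flags live in a dict that could be reloaded at runtime
# (e.g. from a properties file), so a broken feature can be switched off
# without a redeploy.
FLAGS = {"new_checkout_flow": True}

def checkout(order):
    # Route to the new code path only while its flag is on.
    if FLAGS.get("new_checkout_flow", False):
        return f"new flow handled {order}"
    return f"legacy flow handled {order}"

print(checkout("order-1"))          # new flow handled order-1
FLAGS["new_checkout_flow"] = False  # "turn it off on the fly"
print(checkout("order-2"))          # legacy flow handled order-2
```

In production the dict would be backed by a config service or database so the toggle takes effect without restarting the application.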

Release Day: if all goes smoothly, there are 2 sub-parts, Cell A and Cell B.

Cell A releases in the morning and Cell B in the evening.

All these stages run in parallel and everything completes within the time frame.

In Hadoop they can build a jar along with the UI components and deploy it to the servers; nothing specific is needed for any other functionality.

If you have any doubts, please add a comment and let me know!!

Friday, September 25, 2015

Real Time Hadoop Architecture


Technology list in hadoop eco system.

Cluster in real time  


Using the components below you can build one workflow for collecting the logs and showing them as metrics.


HDFS
MAP Reduce 
Hive 
Hbase 
Solr
Storm
Flume
Kafka
ZooKeeper
Redis 


Solr -- indexing
Kafka -- distributing messages
Storm -- processing, like map reduce programs but for real-time events
Flume cluster -- collecting the JVM and application logs in real time
Zookeeper -- coordinating all the job-related configuration
HBase -- storing the data in a column-oriented way
Hive -- querying your metrics for display in the UI

Normally people pair all this with the CDH (Cloudera) or HDP (Hortonworks) distributions.
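The log-to-metrics workflow above can be sketched end to end in plain Python. None of this is the real API of Flume, Storm, HBase, or Hive (the log lines, row keys, and function names are invented for illustration); it only shows the shape of the data as it moves through the pipeline:

```python
from collections import defaultdict

raw_logs = [                      # what Flume would tail from the app/JVM logs
    "2015-09-25 10:00:01 ERROR payment timeout",
    "2015-09-25 10:00:02 INFO order placed",
    "2015-09-25 10:00:03 ERROR payment timeout",
]

def process(line):                # what a Storm bolt would do per event
    date, time, level, *msg = line.split()
    return {"date": date, "level": level, "msg": " ".join(msg)}

store = defaultdict(int)          # stand-in for an HBase counter column
for line in raw_logs:
    event = process(line)
    store[(event["date"], event["level"])] += 1   # row key: (date, level)

# a Hive-style aggregation for the UI: errors per day
errors = store[("2015-09-25", "ERROR")]
print(errors)  # 2
```

The point is that each tool owns one step: collection, per-event processing, keyed storage, then aggregation for display.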

still in progress....

Looking for a job in Hadoop/Big Data? Follow along!!

Greetings All, 

Thanks for taking the time to read this article; let's get to the point. This blog is useful for anyone who is looking for a job in Hadoop/BigData technologies.

Even if you have completed a course or are a subject-matter expert, you still need to be able to answer all the interviewer's questions.

To get a job in Hadoop you should know the things below.

Hadoop Architecture -- this comes from reading and learning

HDFS - Storing
Map reduce  -- Processing

Nowadays hardly anyone uses plain batch processing, but you should still know the architecture of the core parts well, like:

NameNode -- Master node


SNN (Secondary NameNode) -- master
JOBTracker -- master
Task Tracker -- slave
Data Node  -- slave
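Several of the interview questions below come back to the map reduce flow, and it can be simulated without a cluster. A pure-Python sketch of the map, shuffle, and reduce phases (real Hadoop jobs are Java classes; this only shows the concept):

```python
from collections import defaultdict

lines = ["big data is big", "data is data"]

# map phase: each mapper emits (key, value) pairs
mapped = [(word, 1) for line in lines for word in line.split()]

# shuffle phase: the framework groups all values by key
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# reduce phase: each reducer folds one key's values into a single result
counts = {key: sum(values) for key, values in grouped.items()}
print(counts)  # {'big': 2, 'data': 3, 'is': 2}
```

In real Hadoop the mapper and reducer run on the Task Tracker nodes against HDFS blocks, and the shuffle happens over the network between them.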

NOTE1:

In a real-world situation, if you are a PIG/HIVE developer you only communicate with the NameNode machine; you don't need to know the other machines and daemons.



1) Tell me about yourself?
Answer: you know this much better than I do, but make sure to be confident.

2) Explain your company's use case?
Answer:

3) Explain your job's workflow process?

4) What is your Hadoop cluster size?

5) What is your data retention policy?

6) What is the hardware configuration of each node?

7) What is the data capacity of each node in the cluster?

8) How much data are you handling per day?

9) What type of data are you using in your cluster?

10) How do you store the data in your Hadoop cluster? What is the format?

11) What is Hadoop? How would you define it in your own way?

12) Which version of the Hadoop eco system are you using?

13) What is the difference between MR1 and MR2, or between Hadoop 1 and Hadoop 2?

14) What are the core parts of Hadoop?

15) What is the cluster throughput?

16) What are the cluster benchmarks?

17) What tools are you using in your company?

18) Explain the map reduce flow.

19) Did you write any map reduce programs?

20) What input format did you use in your use case?

21) What monitoring tools did you use?

22) Did you use Puppet or Nagios in your project?

23) How do you know about job failures apart from logs and the alert mechanism?

24) How many jobs run daily, and how much data do they handle?

25) Explain your role in the architecture.

26) Do you have an idea of development/admin work, depending on your position?

27) Did you write reducers in your programs?

28) How do you write a custom input format?

29) How do you read mail contents stored in HDFS?

30) What is the difference between Pig and Hive?

31) What is HDFS federation?

32) How do you tune poorly performing jobs?

33) How do you troubleshoot the cluster if you're not an admin?

34) How do you tune map reduce programs?

35) Can you explain what Pig is?

36) What are GROUP and COGROUP?

37) Did you write UDFs?

38) Explain one UDF: how did you implement it, and why did you use it?

39) Did you use any external tables in Hive?

40) Did you create any tables in Hive/Pig? Explain the syntax.

41) What is Flume?

42) In a cluster with 4 nodes, is a single Flume agent enough, or does it need to be installed on all 4?

43) Which Flume channel did you use?

44) Can you set up your own cluster if I give you the machines?

45) Can you explain your end-to-end flow? If a failure happens among millions of jobs, how do you trace it?

Still in progress...