Completed

Reading and Writing a Parquet File with Nested Datatypes Using PySpark

Job Description:

Please find the images attached

Read the parquet file line by line and column by column. Each column value will be passed to another function that returns a new value; the string in the current column value has to be replaced with that new value, and the records (with changed values) written to a new parquet file. While writing, we have to make sure that the order of the records, the schema structure, everything stays the same (apart from the changed values).

For example, in the sample [login to view URL], we see [login to view URL] for all old names: James, Michael, Robert, Washington...

For old_name --> James, create a function named transformer(), and if we pass [login to view URL] ---> brown should be replaced with black.

For old_name --> Michael, if we pass [login to view URL] ---> null should be replaced with black.

The changes should appear in the new parquet file named [login to view URL], with the same schema structure, order of columns, and order of records; a sketch of such a transformer() follows below.
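A minimal sketch of what such a transformer() could look like, covering only the two example rules above (brown becomes black, null becomes black); the real mapping logic is an assumption left to the implementer:

def transformer(value):
    # Hypothetical per-value mapping implementing the two sample rules:
    # 'brown' is replaced with 'black', and a null value also becomes 'black'.
    if value == 'brown' or value is None:
        return 'black'
    return value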

Note: the sample data is just example input; the logic should be dynamic, since the parquet file schema will not be the same all the time. Our code should read the parquet file schema dynamically and create the new parquet file with the changed data (xxx); the rows, schema, and columns should stay the same. A schema-driven traversal along these lines is sketched right after this note.
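A hedged sketch of that dynamic traversal, assuming the transformer() above and only the container types that appear in the sample (struct, map, array); it rewrites every leaf value while mirroring the original schema exactly:

from pyspark.sql.types import StructType, MapType, ArrayType

def transform_value(value, data_type):
    # Walk the value in lockstep with its schema type so the output
    # keeps exactly the same shape as the input.
    if isinstance(data_type, StructType):
        if value is None:
            return None
        return tuple(transform_value(value[f.name], f.dataType)
                     for f in data_type.fields)
    if isinstance(data_type, MapType):
        if value is None:
            return None
        return {k: transform_value(v, data_type.valueType)
                for k, v in value.items()}
    if isinstance(data_type, ArrayType):
        if value is None:
            return None
        return [transform_value(v, data_type.elementType) for v in value]
    # Leaf value (StringType etc.): hand it to transformer(), nulls included,
    # so the per-value function can decide how to handle them.
    return transformer(value)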

Code snippet for the sample data (imports added for completeness):

from pyspark.sql.types import StructType, StructField, StringType, MapType

dataDictionary = [
    ('James', {'hair': 'black', 'eye': 'brown'}, ("James", "", "Smith")),
    ('Michael', {'hair': 'brown', 'eye': None}, ("Michael", "Rose", "")),
    ('Robert', {'hair': 'red', 'eye': 'black'}, ("Robert", "", "Williams")),
    ('Washington', {'hair': 'grey', 'eye': 'grey'}, ("Maria", "Anne", "Jones"))
]

schema = StructType([
    StructField('old_name', StringType(), True),
    StructField('properties', MapType(StringType(), StringType()), True),
    StructField('name', StructType([
        StructField('firstname', StringType(), True),
        StructField('middlename', StringType(), True),
        StructField('lastname', StringType(), True)
    ]))
])
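An end-to-end sketch, assuming the sample data and schema above plus the transform_value() helper sketched earlier; the file paths are placeholders, and coalesce(1) is only a best-effort way to preserve record order, since parquet makes no ordering guarantee across multiple partitions:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('nested-parquet-rewrite').getOrCreate()

# Write the sample data out so there is an input file to read back.
spark.createDataFrame(data=dataDictionary, schema=schema) \
    .write.mode('overwrite').parquet('/tmp/sample_input.parquet')

# Read the input, rewrite every value via the schema-driven traversal,
# and write a new file that reuses the schema read from the input.
src = spark.read.parquet('/tmp/sample_input.parquet')
src_schema = src.schema
changed = src.rdd.map(lambda row: transform_value(row, src_schema))
spark.createDataFrame(changed, src_schema) \
    .coalesce(1) \
    .write.mode('overwrite').parquet('/tmp/sample_output.parquet')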

The sample-data screenshot shows the sample data.

The sample-schema screenshot shows the schema details.

Skills: PySpark

About the client:
(2 reviews) Mountain House, United States

Project ID: #31183672

Awarded to:

hmrizak

Hello, when viewing your job details, it really hooked me because I have so much experience in this area. With solid experience in data analysis and Microsoft certifications in data management and analysis, SQL Server an… More

$7 USD / hour
(0 reviews)
0.0

2 freelancers are bidding on average $8/hour for this project

ahmadndiayee

Hi, I am an experienced Data Engineer with a solid background in Spark. I have worked on many projects with Spark, Scala, Python, Cassandra, Snowflake, AWS,... Let's have a call for more details about the project. More

$8 USD / hour
(1 review)
1.8