Data Wrangling Demystified

Data is only purposeful when it is useful, and the journey from data creation to data output is less straightforward than it might seem. Data preparation is a technical process that typically follows an iterative loop of analysis, cleansing, synthesizing, labelling and, finally, formatting so the data can be reused. Data wrangling makes this loop meaningful by sorting through the various forms of data in a dataset: it refines raw data into manageable, well-structured chunks that data engineers and other users can readily turn into charts or infographics.

So how exactly is data wrangling helpful? It makes data purposeful by accounting for factors such as the data’s:

a) Trustworthiness (i.e. authenticity)

b) Context

c) Descriptiveness

d) Clarity (i.e. labelling and removing the noise from data; see the sketch below)
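
To make the clarity point concrete, here is a minimal sketch in Python with pandas showing how noisy records might be cleaned and labelled before analysis. The column names, values and thresholds are purely illustrative, not taken from any particular dataset.

```python
import pandas as pd

# Illustrative raw data with typical noise: a missing identifier,
# inconsistent casing and an exact duplicate row
raw = pd.DataFrame({
    "customer": ["Acme", "acme", "Beta Corp", None],
    "revenue": [1200.0, 1200.0, None, 350.0],
})

cleaned = (
    raw
    .dropna(subset=["customer"])                  # drop rows with no identifier
    .assign(customer=lambda d: d["customer"].str.strip().str.title())
    .drop_duplicates()                            # collapse exact duplicates
    .assign(size_label=lambda d: pd.cut(          # add a descriptive label
        d["revenue"], bins=[0, 500, 5000], labels=["small", "large"]
    ))
)
print(cleaned)
```

The point of the sketch is simply that each step maps to one of the factors above: removing noise, standardising context and adding labels that make the data easier to consume downstream.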

Data Wrangling: Important, But Difficult

Although data wrangling is a critical aspect of any data management process, knowing the context for which the data is being prepared is key, and this is where most users struggle. As we mention in another article, 50-80% of a data scientist's time is spent on data wrangling. Within that time, analysts spend roughly:

· 60% organizing and cleaning data;

· 19% collecting datasets;

· 9% mining data to draw patterns;

· 3% training datasets;

· 4% refining algorithms; and

· 5% on other tasks.

Getting Started With Data Wrangling Tools

Eighty percent sounds like a daunting number. Fortunately, technology leaders have developed solutions to deal with the increasing volumes of data produced each day. Microsoft, for example, offers SQL Server, which supports the ETL (extract, transform, load) cycle. There, data wrangling happens through Data Transformation Services (DTS, since superseded by SQL Server Integration Services), which can load data from sources such as text files, CSV, Excel, MS Access, PostgreSQL and MySQL, and then transforms that data according to user requirements. Open source tools exist as well.
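
As a rough open-source counterpart to such an ETL flow, the sketch below extracts records from a CSV file, applies a simple transformation, and loads the result into a SQLite table using Python, pandas and SQLAlchemy. The file name, column names and target table are assumptions made for illustration only.

```python
import pandas as pd
from sqlalchemy import create_engine

# Extract: read a (hypothetical) source file
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Transform: normalize column names and derive a field analysts need
orders.columns = [c.strip().lower() for c in orders.columns]
orders["order_month"] = orders["order_date"].dt.to_period("M").astype(str)

# Load: write the cleaned data into a local SQLite database
engine = create_engine("sqlite:///warehouse.db")
orders.to_sql("orders_clean", engine, if_exists="replace", index=False)
```

Commercial tools wrap these same extract, transform and load steps in a graphical or managed environment, but the underlying cycle is the same.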

Challenges Using These Tools

One challenge data engineers face with these tools is the lack of common, widely adopted best practices. A standard operating procedure shared across tools would let an engineer competent in one tool adapt to another with ease. Another limitation ETL developers run into is that loading very large datasets into a repository sometimes fails because of factors like network latency or application unresponsiveness. The practical way to deal with this is to load a small chunk of data into the repository first and run it through all the required validity checks; once that succeeds, one can move on to loading larger chunks, saving a considerable amount of time. One way to do this is sketched below.
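
Here is a minimal sketch of that chunk-first approach using pandas and SQLAlchemy: the file is read in chunks, the first chunk is validated before anything is committed, and the remaining chunks are appended only if the check passes. The file name, schema checks and table name are assumptions for illustration.

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///warehouse.db")
chunks = pd.read_csv("large_dataset.csv", chunksize=10_000)

# Validate the first chunk before committing to the full load
first = next(chunks)
assert {"id", "amount"}.issubset(first.columns), "unexpected schema"
assert first["id"].notna().all(), "null keys in sample chunk"

first.to_sql("staging", engine, if_exists="replace", index=False)

# Append the remaining chunks only after the sample passes validation
for chunk in chunks:
    chunk.to_sql("staging", engine, if_exists="append", index=False)
```

If the sample chunk fails validation, the failure surfaces in seconds rather than after hours of loading, which is the time saving the chunked approach is after.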

Despite its challenges, data wrangling remains a critical part of the process, and we foresee exciting new developments in this area in 2020.