Wrapping up an internship at Wrangle

The names node, built by Neel during his internship.

Hi everyone! My name’s Neel Runton. I’m an eleventh grader from Cary, North Carolina, attending Research Triangle High School, and for the past few weeks I’ve been interning at Wrangle. Because I’m in the North Carolina School of Science and Math Online Program, I was given the opportunity to join the Entrepreneurship Fellowship Program, part of NCSSM’s Summer Research and Innovation Program. Through this program I was chosen to intern at a Durham-based tech startup, and because of my interest in Python coding and curiosity about software development, Wrangle was the perfect fit for me.

When I found out I would be interning with Wrangle, I was super excited. I imagined myself working in an open area with 15-20 people, all at standing desks working at their computers – the stereotypical tech startup. I initially thought I would only be working with one person at the company, and that since I was an intern, I wouldn’t be doing anything all that important. When I actually started working, though, my experience turned out to be much different. First, I found out that Wrangle only had three employees. That initially shocked me and made me reconsider how stereotypical of a startup this was, but it later reassured me that I would be working closely with everyone at Wrangle and would learn important lessons about early-stage tech startups. Then I found out that I would actually be writing code that would make it into the product. That really excited me, because at that point I realized I wouldn’t be treated as an intern so much as a new employee. Finally, I realized how much I would learn from this experience. From working closely with everyone at Wrangle, to writing code that ships in the product, to learning about the work needed to start a business, I knew this experience would be invaluable to me.

During my time at Wrangle I have worked on a wider range of things than I expected before I got here. I started off learning how data wrangling works, and the Python code behind it, by writing my own program to clean a sample dataset I was given. After that, I started developing the actual product, implementing analytics into the backend. During this time I became increasingly familiar with the complex code that gives Wrangle its sophistication and functionality, and that growing familiarity allowed me to tackle my next project at Wrangle: the names node.
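For a flavor of what that kind of first cleaning pass looks like, here’s a minimal pandas sketch; the file and column names ("sample_leads.csv", "name", "email") are hypothetical stand-ins, not the actual program:

```python
import pandas as pd

# Load the sample dataset (file name is a hypothetical stand-in).
df = pd.read_csv("sample_leads.csv")

# Drop fully empty rows and exact duplicates.
df = df.dropna(how="all").drop_duplicates()

# Normalize whitespace and casing in illustrative columns.
df["name"] = df["name"].str.strip().str.title()
df["email"] = df["email"].str.strip().str.lower()

df.to_csv("sample_leads_clean.csv", index=False)
```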

Developing the names node was probably the most valuable learning experience for me. Through it I learned the complexity of not only the backend code and how it works, but also how the frontend code works, and how new functionality can be seamlessly integrated into the product. It was also the most rewarding experience, because it let me work on something visible in the product, which I never thought I’d be able to do. Seeing something I built live in the product showed me how much I love software development, and that I could really see myself doing this in the future. After the names node was completed, I started working on a SQL query to analyze the analytics data gathered from the trackers I implemented in my first week. This was also very valuable, because it taught me how to use databases and how to analyze the data in them with SQL.
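The post doesn’t show the actual query, but analyzing event-tracker data with SQL might look roughly like this sketch; the "events" table, its columns, and the database file are all hypothetical:

```python
import sqlite3

conn = sqlite3.connect("analytics.db")  # hypothetical database file

# Count tracked events per day to spot usage trends
# (table and column names are illustrative).
query = """
    SELECT date(created_at) AS day,
           event_name,
           COUNT(*) AS occurrences
    FROM events
    GROUP BY day, event_name
    ORDER BY day, occurrences DESC;
"""

for day, event_name, occurrences in conn.execute(query):
    print(day, event_name, occurrences)

conn.close()
```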

Overall, my past few weeks interning at Wrangle have been one of the best learning experiences of my life. I’ve learned a lot about software development, like how to use Git to collaborate with other developers, and how much work and complex code it takes to build a product a user can seamlessly use. I’ve also learned valuable lessons about entrepreneurship, like how hard it is to get paying customers, and that every company starts somewhere. On top of that, I’ve learned that I especially like working at early-stage tech startups, where I can work on the entire product and communicate with everyone at the company, learning how the whole product works, rather than sitting on one small product development team at a larger corporation with very limited interaction outside my team.

Thank you to everyone at Wrangle who has not only taught me these lessons, but practically held my hand through the learning process; this experience has been nothing short of life changing.

Solutions to common Salesforce data problems

problem solving on a chalkboard

Salesforce currently tops the charts as the #1 CRM provider worldwide. With $10.5 billion in FY2018 revenue, the company boasts an impressive 25% year-over-year revenue growth. With such an enormous and highly varied user base, it’s not surprising that mistakes get made along the way when people set up or update a Salesforce integration.

Poor data quality has been cited as one of the top three reasons why CRM projects fail. When it comes to your organization’s critical data, it’s important to keep it in top shape. We’ve compiled a list of the most common data problems with Salesforce and, most importantly, how you can fix them.

Duplicate Records

It’s really easy to get into a situation where your data has been inadvertently duplicated. You might have a customer who enters their full name in one form, and a nickname in another. Or you might have a typo or incomplete entry along the way that prevents your CRM from detecting that the entry is a duplicate.

Salesforce does offer some built-in tools to help with this, most notably their Duplicate Management suite. However, sometimes these tools just aren’t enough, or aren’t implemented correctly.

Here are a few tools and techniques you can use to de-dupe your Salesforce data:

  • Cloudingo – Cloudingo is focused specifically on Salesforce, and offers a managed dedupe feature.
  • DupeBlocker – From Validity (formerly DemandTools), DupeBlocker is a real-time dupe blocker for Salesforce.
  • Fuzzy matching in Salesforce – Using fuzzy matching, you can set up rules to track down and manually manage your duplicate records (see the sketch below).

Remember – always make a backup before performing major deduplication work. Better safe than sorry.
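If you want a feel for how fuzzy matching works under the hood, here’s a minimal sketch using Python’s standard library; the names and the 0.8 threshold are illustrative, not what any of the tools above actually use:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity score between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records = ["John Smith", "Jon Smith", "Maria Garcia"]

# Flag pairs above a similarity threshold for manual review.
for i, a in enumerate(records):
    for b in records[i + 1:]:
        score = similarity(a, b)
        if score > 0.8:
            print(f"Possible duplicate: {a!r} vs {b!r} ({score:.2f})")
```

Production dedupe tools use far more sophisticated matching, but the core idea is the same: score candidate pairs, then review anything above a threshold.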

Incomplete Records

Ever find yourself sorting through Salesforce data only to hit a bunch of incomplete entries? Filling in the missing data requires hours of manual labor. For example, what happens when phone numbers or company information is missing from your leads?

Here are a few tips on how to manage incomplete records:

  • Find the source – this is perhaps the most important of all. If your records are incomplete, finding out how they got that way is critical. Delegate a point person to ensure that records are entered fully and completely moving forward (the sketch after this list shows one way to flag incomplete records).
  • Use a dedicated tool like ZoomInfo to enrich your data. ZoomInfo provides services to find and fill missing data fields.
  • Try Uplead, which is another tool that offers data enrichment services for your leads.
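Before you can route records to a point person or an enrichment tool, you have to find them. Here’s a minimal pandas sketch for flagging incomplete leads; the file name and the list of required columns are hypothetical:

```python
import pandas as pd

leads = pd.read_csv("leads.csv")  # hypothetical CRM export

# Columns we treat as required for a usable lead (illustrative).
required = ["email", "phone", "company"]

# Flag any lead missing one or more required fields.
incomplete = leads[leads[required].isna().any(axis=1)]
print(f"{len(incomplete)} of {len(leads)} leads are incomplete")

# Hand this subset off for enrichment or follow-up.
incomplete.to_csv("incomplete_leads.csv", index=False)
```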

Inconsistent Data

While Salesforce offers a robust suite of reporting tools, none of it is going to be particularly valuable within your organization if your data isn’t formatted correctly. For example, how can you report across order totals when the amounts are entered in different currencies?

Many people find themselves turning to Excel to manage data problems like this. The problem is that it’s not only a huge time investment; it’s also hard to scale across your organization. Not everyone has the same level of skill in Excel, and introducing repeatable processes when users are manually wrangling data in Excel can be difficult, if not downright impossible.

Here are a few useful tools to help you deal with data inconsistencies:

  • A data platform like Wrangle will allow you to find and fix problems with bad data. You can quickly fix bad formatting, inconsistent casing, and more (see the sketch after this list).
  • A spreadsheet based tool like Airtable offers options for formatting data based on field types.
  • If you’re really attached to Excel, AbleBits is a useful collection of Excel add-ons to help make you more productive.
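As a rough illustration of the kind of normalization involved, here’s a pandas sketch that fixes inconsistent casing and converts mixed-currency amounts into one reporting currency; the file, column names, and exchange rates are placeholders:

```python
import pandas as pd

orders = pd.read_csv("orders.csv")  # hypothetical export

# Normalize inconsistent casing and whitespace in a text column.
orders["state"] = orders["state"].str.strip().str.upper()

# Convert order amounts to USD using fixed placeholder rates;
# in practice you'd pull rates from a live source.
rates = {"USD": 1.0, "EUR": 1.12, "GBP": 1.27}
orders["amount_usd"] = orders["amount"] * orders["currency"].map(rates)
```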

Invalid Records

Sometimes a record looks valid and complete only because a user took a moment to enter some bogus yet real-looking data into a few form fields. This happens often in online lead generation forms, and results in a bunch of real-looking (yet garbage) data in your CRM.

Excel isn’t as helpful here; you’ll need tools that validate your data based on the specific type of field you’re looking at. For example, you may need to verify that an email address that was entered isn’t fake, or that a zip code is correct based on additional location data.

Here’s where tools geared towards specific types of data shine:

  • For email addresses in particular, you can try tools like NeverBounce, XVerify, or Hunter to help ensure that your email addresses are valid.
  • For working with other data types in addition to email addresses, you can try Wrangle, which offers intelligent validation for field types like zip codes, phone numbers, and addresses (see the sketch after this list).
  • For address data, Experian offers a free address lookup tool to verify that your address data is correct.
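For a sense of what field-type-aware validation means, here’s a minimal sketch of format checks for emails and US zip codes. Note that a regex can only confirm the format; you still need a service like NeverBounce to confirm an address can actually receive mail:

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")
US_ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def looks_valid_email(value: str) -> bool:
    """Cheap format check only; deliverability needs a real service."""
    return bool(EMAIL_RE.match(value))

def looks_valid_us_zip(value: str) -> bool:
    return bool(US_ZIP_RE.match(value))

print(looks_valid_email("jane@example.com"))  # True
print(looks_valid_email("not-an-email"))      # False
print(looks_valid_us_zip("27701"))            # True
```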


Have you solved any Salesforce data quality problems that we didn’t cover here? Let us know!

Announcing a fresh new look for Wrangle!

New Wrangle user interface

Recently, we rolled out a brand new version of Wrangle! We’re excited about this set of changes, and here to walk you through what we changed, and what you can expect from the new Wrangle.

We’re a small startup based in the heart of Durham, NC. We started working on the first version of our data platform in early 2019 and have been iterating ever since. Our mission is to make it easier for people to work with data without needing to write code.

Our first version of Wrangle had a number of interesting features. For example, we made the ability to write Python code really prominent in the interface. We wanted to enable developers to easily wrangle their datasets by writing code. While we still allow this as a step in the new version of Wrangle, it’s now part of a more repeatable set of steps.

Old Wrangle user interface

We also wanted to pave the way towards automation with our new platform. While you can’t (yet) use Wrangle programmatically, our new platform gets us a very large step closer to rolling out automatable workflows. We’re excited to keep moving forward along this path, as we believe that there are certain types of data manipulations that could be improved through automation.

Here’s an overview of what’s new:

Drag and drop interface

With this interface, you’ll see we’ve designed a repeatable model of visual “Steps” that you can combine creatively in order to transform your data.

  • For the astute observers out there: you may notice that Projects are now called Wrangles. Each Wrangle includes a canvas where you add your Data and pick Steps to connect to it.
  • You can chain together as many Steps as you want until you’re ready to export your data as a CSV.
Example of dragging in a step

Steps to fix and format data

We’ve introduced a number of customizable steps to help you find and fix problems in your data.

  • Bad email addresses due to invalid MX records? We can help you automatically clear, split, or modify these entries in your datasets. 
  • Problems with Excel stripping out the leading zeros from your zip code data? We’ll help you find and fix these entries (see the sketch below).
  • We’ve also added the ability to fix casing, date formatting, and location data.
Example of a fix step
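For the zip code problem in particular, here’s what the underlying fix amounts to; a minimal pandas sketch (not Wrangle’s implementation), with hypothetical file and column names:

```python
import pandas as pd

# Read zip codes as strings so they aren't coerced to numbers on load.
df = pd.read_csv("contacts.csv", dtype={"zip": str})

# Restore leading zeros that Excel stripped: "2138" -> "02138".
df["zip"] = df["zip"].str.zfill(5)
```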

Steps to modify data

Chain these together into your Wrangles to create powerful and repeatable workflows that you can share with your teammates.

  • Need to add a custom field to a huge dataset? No problem. Our scalable system allows you to work with large datasets without running into the kind of performance problems you might see in Excel.
  • Easily merge columns together with custom delimiters (think “address” + “city” + “state” + “zip”, for example), and find and replace values within your columns (see the sketch below).
  • While the ability to write rules in Python isn’t quite as prominent in the new UI, it’s still possible; for the coders and data scientists out there, we have a Python Code step just for you.
Example of a modify step
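To show the idea behind the merge step, here’s a minimal pandas sketch (again, not Wrangle’s implementation) that joins address parts with a delimiter and does a find-and-replace within a column:

```python
import pandas as pd

df = pd.DataFrame({
    "address": ["123 Main St"],
    "city": ["Durham"],
    "state": ["NC"],
    "zip": ["27701"],
})

# Merge columns together with a custom delimiter.
df["full_address"] = df[["address", "city", "state", "zip"]].agg(", ".join, axis=1)

# Find and replace values within a column.
df["state"] = df["state"].replace({"NC": "North Carolina"})
```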

This is just the beginning. We’re excited to keep building out our data platform, and invite you to join us on our journey! Try out the new Wrangle free, or drop us a line and let us know how you’re using our platform.

Salesforce acquires Tableau

Salesforce acquires Tableau.

Last week, Salesforce announced a massive acquisition, buying Tableau for a whopping $15.7 billion in an all-stock deal. Or $14.3 billion, depending on how you do the math. Either way, it’s a massive deal for Tableau, the 16-year-old market leader in data visualization. Salesforce boasts a greater market share increase for 2018 in North America, Western Europe, and Asia-Pacific than all the other CRM vendors combined. This paves the way for Tableau to expand on a scale that wasn’t previously possible.

“Data is the foundation of every digital transformation, and the addition of Tableau will accelerate our ability to deliver customer success by enabling a truly unified and powerful view across all of a customer’s data.”

Keith Block, co-CEO of Salesforce.

This isn’t the only billion-dollar deal to happen in the BI space so far this year; Google recently bought Looker for $2.6 billion. While Looker’s data modeling requires some coding, Tableau’s point-and-click solution is tailored more towards citizen data scientists. The trend towards enabling business users to analyze data is on the rise.

“By 2019, citizen data scientists will surpass data scientists in the amount of advanced analysis produced”.

Gartner, 2017.

Tableau’s user conference (dubbed TC, for Tableau Conference) attendance is a great indicator of its rise in popularity within the BI market; TC18 attendance was up to 17k+, a whopping 309% increase from TC14 just a few years earlier. It’s worth noting that not only are individual attendees expressing interest here; given that ticket prices are $1600+, this is a massive investment on behalf of companies who are often footing the bill for their employees to attend.

Tableau Conference attendance, 2014-18

One of the most widely touted strengths of Tableau is its ease of use. In 2018, they were named a leader in Gartner’s Magic Quadrant for Analytics and BI platforms for the 6th consecutive year.

While Tableau’s pricing and packaging is likely to change over time with their move into Salesforce, it’s encouraging to see the analytics market turning more towards tools like Tableau that have clearly placed a premium on intuitive user interfaces.
Here at Wrangle, we’re doing the same. Check out our data platform, and follow us on Twitter as we forge a path forward in today’s data-centric world.

The Six Phases of Data Wrangling

6 phases of data wrangling

Data wrangling (also sometimes referred to as data munging) is the process of cleaning and transforming a dataset prior to storage or analysis. Data wrangling is something that’s becoming increasingly critical for businesses today. We know from Gartner research that the average financial impact of poor data quality on organizations is $9.7 million annually.

As a core foundation for effective data quality, data wrangling techniques can vary depending on use cases. For some organizations, data wrangling is the first step in building out a WDI (Web data integration) strategy. For others, data wrangling may be a precursor to building an effective machine learning pipeline.

No matter what your use case, data wrangling follows these six phases:

1. Discover

This is an exploratory process where datasets are opened and analyzed for problems or inconsistencies. You can think of this as the planning stage in the process. Before data can be transformed, it’s critical to explore and understand how it’s structured.
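In Python, a first discovery pass often looks something like this pandas sketch (the file name is a placeholder):

```python
import pandas as pd

df = pd.read_csv("dataset.csv")  # hypothetical input

# Get a feel for the structure before changing anything.
df.info()                # column names, dtypes, non-null counts
print(df.describe())     # summary statistics for numeric columns
print(df.isna().sum())   # missing values per column
print(df.head())         # eyeball a few raw rows
```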

2. Structure

This is the architectural phase of the process. Here’s where you’ll define your ideal schema and how you’re going to get there. You may want to join together disparate datasets, or re-organize your data to make it easier to work with down the road.
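A common structuring move is joining two sources on a shared key. Here’s a minimal pandas sketch; the files, key, and columns are all hypothetical:

```python
import pandas as pd

leads = pd.read_csv("leads.csv")          # hypothetical inputs
companies = pd.read_csv("companies.csv")

# Join the disparate datasets on a shared key.
combined = leads.merge(companies, on="company_id", how="left")

# Reorder columns to match the schema downstream tools expect.
combined = combined[["lead_id", "name", "email", "company_name"]]
```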

3. Clean

Whether bad data comes in as a result of a programming error or incorrect manual entry, data cleansing is needed for just about every dataset. You’ll want to make sure that your data doesn’t have errors or inconsistencies that can cause problems down the road. For example, if you have names cased incorrectly, you won’t be able to use them when sending emails.
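The name-casing example takes one line to fix in pandas; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({"first_name": ["JANE", "bob", " Carlos "]})

# Fix casing and stray whitespace so names render correctly in emails.
df["first_name"] = df["first_name"].str.strip().str.title()
# -> "Jane", "Bob", "Carlos"
```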

4. Enrich

It’s not uncommon for datasets to be missing key fields. The enrichment process involves filling in the gaps within your data; sometimes this is also referred to as data hydration. You may want to, say, add some additional location data to your leads before adding them to your system of record.
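Conceptually, enrichment is a lookup plus a fill. Here’s a minimal sketch; the zip-to-city table stands in for whatever reference data or enrichment service you’d actually use:

```python
import pandas as pd

leads = pd.DataFrame({
    "email": ["a@example.com", "b@example.com"],
    "zip": ["27701", "94107"],
    "city": [None, "San Francisco"],
})

# Reference data you might source from an enrichment provider (illustrative).
zip_to_city = {"27701": "Durham", "94107": "San Francisco"}

# Fill gaps without overwriting values you already have.
leads["city"] = leads["city"].fillna(leads["zip"].map(zip_to_city))
```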

5. Validate

Once you think your data is in good shape, you’ll want to test it. This can mean different things for different types of data. For example, for date fields, you may need to ensure your dates are formatted correctly and include timestamps. For emails, you might want to make sure not only that they’re formatted correctly, but also that the addresses can actually receive emails.
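For the date example, here’s a minimal pandas sketch that checks values against an expected timestamp format and surfaces the rows that fail:

```python
import pandas as pd

df = pd.DataFrame({"signup_date": ["2019-06-01 14:32:00", "06/02/2019", "not a date"]})

# Anything that doesn't match the expected format becomes NaT,
# so the failing rows are easy to flag for review.
parsed = pd.to_datetime(df["signup_date"], format="%Y-%m-%d %H:%M:%S", errors="coerce")
print(df[parsed.isna()])  # the rows that fail validation
```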

6. Publish

Once your data is ready to go, you’ll likely find yourself passing it along to a database or a visual analysis tool like Tableau. The better you get at the previous five steps, the faster you get to this one – and off to analysis!
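Publishing is often a single call once the data is clean. A minimal sketch, using SQLite as a stand-in for your actual database:

```python
import sqlite3
import pandas as pd

df = pd.read_csv("wrangled_output.csv")  # hypothetical cleaned dataset

# Load into a database; BI tools like Tableau can connect from here.
conn = sqlite3.connect("warehouse.db")
df.to_sql("clean_leads", conn, if_exists="replace", index=False)
conn.close()
```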



Whether you’re using Excel, Python, or an automated tool like Wrangle for your data wrangling, it’s important to work out a clear process for yourself and your team so that you can ensure a high standard of data quality.

The top 10 ETL tools for marketing in 2019

Person organizing blocks of data

Marketing organizations are transforming the way businesses operate today. What’s at the core of this transformation? Data. From lead data, to website analytics data, to customer behavior data, today’s leading marketers are becoming experts at dealing with data.

“Data isn’t an abstract goal at leading companies; it is part of their culture. They are more than twice as likely to say that they routinely take action based on insights and recommendations from analytics than their peers in the mainstream”

Econsultancy/Google study

This is where ETL tools come in. ETL (extract, transform, load) is a type of software that enables you to effectively take data out of one source, transform it, and load it into the end target. An example of this would be taking data out of Marketo, adding some additional tracking metadata or fields, and then inserting the modified data into Salesforce.
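In code, the ETL pattern reduces to three functions. Here’s a hedged Python skeleton of the Marketo-to-Salesforce example; the endpoints and field names are placeholders, not real API calls for either product:

```python
import requests

def extract(source_url: str) -> list[dict]:
    """Pull raw records out of the source system."""
    response = requests.get(source_url)
    response.raise_for_status()
    return response.json()

def transform(records: list[dict]) -> list[dict]:
    """Add tracking metadata to each record."""
    return [{**r, "source": "marketo", "campaign": "spring-2019"} for r in records]

def load(records: list[dict], target_url: str) -> None:
    """Insert the modified records into the end target."""
    for record in records:
        requests.post(target_url, json=record).raise_for_status()

# Placeholder endpoints standing in for Marketo and Salesforce.
records = extract("https://api.example.com/marketo/leads")
load(transform(records), "https://api.example.com/salesforce/leads")
```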

Here’s a breakdown of the top 10 marketing focused ETL tools today:

1. Tableau Prep

If you’re already working within the Tableau ecosystem, then Tableau Prep is an ideal choice. It’s included free with Tableau’s Creator license, and boasts a user-friendly interface that will be familiar to existing Tableau users. There’s a 14-day free trial available.


Tableau Prep

Features include:

  • Three different views – the full data prep process, column profiles, and row-level data.
  • Click-to-edit – immediately updates your data as you work.
  • Fuzzy clustering – take advantage of smart features like grouping by pronunciation to help organize your data.
  • 40+ supported data sources – choose from cloud or on-premises data sources. Works with all the major data providers.

2. Wrangle

Wrangle is a self-service cloud based platform to help you clean, manipulate, and analyze your data. Wrangle allows for easy data transformations via an intuitive drag and drop interface. There’s also a free version available.

Wrangle

Features include:

  • Highly performant – Wrangle is built to scale for large datasets.
  • Custom steps – create a reusable pipeline to handle your specific use cases.
  • Easy auditing – quickly see what changes have been applied to your data.
  • Publishable workflows – share your data transformation steps with your team, or the general public.

3. Improvado

Improvado offers 150+ marketing API connections without requiring you to write a line of code, letting marketers streamline data sources and create custom reports and visualizations.

Improvado

Features include:

  • Several supported data warehouses – PostgreSQL and Google BigQuery.
  • More than 150 integrations – includes Google Analytics, Campaign Manager, HubSpot, and more.
  • Public REST API – allows for custom integrations.
  • Whitelabelling – customize the branding of your reporting UI.

4. Xplenty

Xplenty offers a scalable and intuitive drag and drop interface for building data-driven workflows, with out-of-the-box data transformations. Omnichannel marketers will find support here for integrating all sorts of marketing sources into a single source of truth.


Xplenty

Features include:

  • Highly performant – set up to process billions of records per hour if you need it.
  • 100+ integrations – including Salesforce, Hubspot, Facebook Ads, and Google AdWords.
  • Job scheduling – automate your workflows with scheduled jobs.

5. Parabola

Parabola is a drag and drop tool to allow anyone to create a data workflow. Mix and match steps to create custom workflows to suit your needs.


Parabola

Features include:

  • Address converter – converts address data using the Google Maps API.
  • Sentiment analysis – analyzes text for sentiment using Google’s Machine Learning API.
  • Conditional logic – add columns conditionally within your workflows.

6. Starfish

StarfishETL is a platform that specializes in migrating your CRM data. If you’re looking to migrate your cloud or on-premises data, then it’s worth taking a look at this self-service migration wizard.

Starfish

Features include:

  • Custom field support – easily migrate all your custom fields.
  • Undo – prevent costly mistakes by easily undoing any of your data loading actions.
  • Data testing – test your migrations before you start them.

7. Funnel

Funnel is an ETL tool for marketers and advertisers, connecting to over 400 data sources. If you’re looking for a tool that makes ROAS tracking dead simple, then Funnel might be worth a look.


Funnel

Features include:

  • Unlimited data sources – if you can collect it, Funnel will consume it.
  • Currency conversion – automated currency conversions with adjustments for exchange rates.
  • Custom dimensions – create meaningful groups out of your marketing data.

8. EasyMorph

EasyMorph is a data preparation and work automation tool. Designing workflows is 100% visual, with no coding required. A free version is also available.

EasyMorph

Features include:

  • Built-in data visualization – get insights into trends and patterns.
  • Customizable automation – you can perform specific actions based on the state of your data.
  • Visual transformations – follow the logic of your data transformations.

9. Tray

Tray’s mission is to enable citizen automators to integrate and automate cloud applications. Tray offers a drag and drop builder to create automated workflows around data processing and routing.


Tray

Features include:

  • Automated retries – if a connection fails, Tray will keep trying in order to keep processes running smoothly.
  • Universal connector – connects to most RESTful APIs.
  • Custom field support – no field is left behind; Tray will grab them all for you.
  • JavaScript functions – use Lodash in your workflows to add scripted functionality.

10. Adverity Datatap

Designed specifically for working with marketing data, Datatap from Adverity allows you to connect to multiple systems to easily synchronize your data.

Datatap

Features include:

  • Security focused – uses top-of-the-line encryption, 2FA by default, and is GDPR compliant.
  • 100+ data connectors – connect to Google Analytics, Magento, Emarsys, and many more with pre-built data connectors.
  • Scheduling – easily configure job schedules to ensure the highest level of data accuracy.

Is Excel the right tool for your job?

Before Excel, Microsoft marketed a spreadsheet program called Multiplan in the early 1980s. Then, in 1985, the first version of Excel for Macintosh was released, followed by the first Windows version two years later.

Fast forward to today, and the use of Excel is still going strong. It’s impressive (and rare) to see an application withstand the test of time like this. Adobe Photoshop has also been around since the 1980s and is entrenched in society to the point that it’s become an official verb. With the exception of Photoshop, there aren’t many other stories like Excel’s out there.

Part of Excel’s overwhelming success has been due to the fact that it’s become nothing short of a must-have skill for today’s digital professional. Proficiency with digital productivity tools like Excel is required for up to 82% of middle-skilled positions today, while Excel certifications could earn you up to 12% more per paycheck.

From professionals using industry specific formulas (like XNPV for finance), to small business owners using Excel to run the whole business, Excel is used in nearly every industry, for a massive variety of use cases. Sometimes, it’s because it just might be the best tool for the job. But is this always true, or are we just used to reaching for the hammer because it’s there?

Nearly one in five large businesses has suffered financial losses as a result of errors in spreadsheets. As a result, we’re even seeing standards developed specifically around good spreadsheet design for certain industries.

This should be a red flag that we may be reaching peak Excel (there’s a joke buried in there somewhere). There’s the now infamous $6 billion spreadsheet related loss attributed to the London Whale in 2012. Our everyday problems with Excel don’t need to carry such a hefty price tag before we start looking for more tailored solutions. In fact, industry experts are increasingly recommending ditching Excel for more specialized tools.

Photoshop is also starting to see similar problems with its all-encompassing array of features. Sketch (created by Bohemian Coding in 2010) has taken a massive swoop into the design tools market, targeting Photoshop specifically. For a tool that’s been around for two decades less than Photoshop, it’s impressive to watch Sketch disrupt the market for design tools the way it has so far.

Both Photoshop and Excel will likely remain the right tool for a number of jobs for a long time to come. But that doesn’t mean they’re the right tool for all (or even most) jobs.

Here at Wrangle, we’re hoping to be to Excel what Sketch is to Photoshop. We’re building a platform that’s highly focused on helping you quickly and easily clean and fix your datasets. We want to help you feel confident that your data is accurate, so you can move on with making decisions that directly affect the bottom line of your business.

We’re not looking to build yet another spreadsheet based alternative to Excel. We’re taking lessons learned from the data science community in terms of how to work with massive datasets, and bringing that to a self service application that anyone can use.

Mailchimp on the Rise

…and what this means for your data.

Person reading digital marketing book

It’s no secret that marketing automation is all the rage these days. In fact, studies show that 75% of marketers say they use at least one type of marketing automation tool in 2019. And adoption is steadily trending up and to the right.

“Spending for Marketing Automation tools will grow vigorously over the next few years, reaching $25.1 billion annually by 2023 from $11.4 billion in 2017”

Forrester

Recently, Mailchimp announced a move to expand from email to a full marketing platform, forecasting $700M in revenue for 2019. The popular email tool is now competing in the same ring as Hubspot and Marketo, who currently lead the marketing automation pack. Mailchimp plans to target small businesses, exclusively focusing on companies of fewer than 100 people, according to Ben Chestnut, Mailchimp’s CEO and co-founder.

As Mailchimp starts to roll out a more robust SMB-focused platform, you can expect to see a wider swath of people exploring its new offerings. Some may be evaluating tools for the first time, while others may be looking to cut costs by switching to a cheaper tool. At $14.99/month for the recommended plan, Mailchimp comes in priced significantly lower than the competition.

For users of existing platforms, this means migrating data from one product to another.

While more established platforms like Hubspot offer direct migration integrations with Mailchimp, the reverse is not the case. In fact, if you head over to Mailchimp’s contact import integration options today, you’ll see a handful of options like connecting to Salesforce or Eventbrite. But what about other marketing platforms? Not so many to choose from.

So to make the leap from another platform into Mailchimp, you’re going to need to do it by hand.

This means you may find yourself doing a lot of manual data wrangling in Excel to get from one system to another. You may even be applying conditional formatting to catch outliers or problems along the way.

This can be tedious and time consuming. It’s not unlikely that Mailchimp will beef up their integrations with existing marketing automation platforms to ease this pain at some point in the future. But in the meantime, what are you supposed to do?

There are lots of useful resources out there, like this guide from Hubspot, which focuses specifically on using Excel for marketing data. They’ll help you learn how to do things like VLOOKUP, SUMIF, and so on. But what about merging data in Excel, or de-duping?

You might be using merge tags in Mailchimp to refer to a field that’s named something else in your Hubspot records. Or, you may have initially imported invalid phone numbers into Hubspot and are looking to level up your data quality as you make the move.

Wrangle does all of this and more for you. If you’re looking for Excel alternatives, or a way to spend less time manually combing through your marketing data, then it’s worth looking at Wrangle.

No matter what your final decision when evaluating Mailchimp, it’s worth knowing that you can quickly and easily evaluate a new marketing automation platform without all the headache that comes with complex data migrations.

5 best practices to improve marketing data quality

bravo

For today’s digital marketers, data quality is an increasingly critical skill to master. An MIT Sloan study estimates the cost of bad data at 15%-25% of revenue for most companies. In a data quality study of executives, only 3% found that their departments fell within the minimum acceptable range of 97 or more correct data records out of 100.

As techniques like personalization become measurably more relevant in driving marketing ROI, we also know that poor data quality is a huge barrier to success. According to a Monetate Study, 23% of marketers cite data quality as their #1 obstacle in implementing successful personalization strategies.

Spend at least an hour or two a week in Excel wrangling bad data? You’re not alone. According to Experian, 30% of companies are engaging in manual data cleansing, while 29% of companies are using data cleansing tools.

Here are 5 best practices to help you increase the quality of your marketing data:

1. AUDIT – Perform regular data audits.

The most important step to improving your marketing data strategy is to first tease out where the biggest problems lie in your data. For some organizations, this could be a problem of duplicate records. For others, the inability to segment due to badly assigned categories (like say, job titles listed as “VP of Marketing”, “VP martech”, “Vice President of Marketing”) could be preventing you from running successful campaigns.

2. STANDARDIZE – Create a set of data standards.

Regardless of the tools used to collect the data, you need to ensure that the data coming into your systems is standardized across your organization. For example, you don’t want an array of different phone number formats coming in from different teams or departments. “415-203-2031” vs “(415)203 2031” will cause problems down the line as you look to join records.

Create a set of common formats that work for your organization, and begin applying them to existing as well as incoming data.
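As a concrete illustration of standardizing the phone formats above, here’s a minimal Python sketch; the dashed US format is just one possible standard your organization might choose:

```python
import re

def standardize_phone(raw: str) -> str:
    """Normalize US numbers to a 415-203-2031 style format."""
    digits = re.sub(r"\D", "", raw)  # strip everything but digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]          # drop a leading country code
    if len(digits) != 10:
        return raw                   # leave oddballs for manual review
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

print(standardize_phone("(415)203 2031"))  # 415-203-2031
print(standardize_phone("415-203-2031"))   # 415-203-2031
```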

3. DEPUTIZE – Assign a data steward within your organization.

Depending on the size and needs of your organization, the data steward could be a dedicated role, or it could be a key stakeholder who takes on this assignment. Either way, it’s critical that you call out a specific individual (or group) who is responsible for data quality across the org.

Your data steward should have core knowledge both of the data itself, and what it’s being used for in order to be successful. Empower this person to make and enforce decisions around your data structure, and collection practices.

4. AUTOMATE – Employ the use of tools to automate data quality.

You’re likely already using a marketing automation tool like Hubspot or Marketo. One thing you may not be doing yet is adding a layer of automation on top of the data that’s going into these systems.

Once data gets into your core marketing automation platform in a bad state, you’re then stuck with going back after the fact to find it, and manually fix it. Take advantage of tools like Wrangle to automate this process. Fix your existing bad data, and then create a system to fix your data to the same standards before it ever enters your system of record.

5. MEASURE – Track the outputs of your efforts to improve data quality.

You want to create a process that sticks. The best way to do this is to create highly visible metrics to show the success of your efforts. With data quality, this could be anything from increasing the number of MQLs and SQLs, to decreasing the number of duplicate records that make it into your system.

Make sure you have a plan in place to measure the results of your efforts, and then share these results with the rest of your team.

5 best practices for data quality

We’re moving forward into a world of increasingly personalized campaigns, and data-driven processes. In fact, the global augmented analytics market is forecast to grow 30.6% between 2018-2023. As we rely increasingly on data to derive marketing insights and forecasts, it’s vital to ensure that the quality of this data is as high as possible.

The 5 whys of data quality

5 people asking why

In the 1930s, the famous Japanese inventor Sakichi Toyoda pioneered a problem-solving technique called the 5 whys. It’s a popular and effective tool for teasing out the details of problems across all kinds of industries. The 5 whys is a simple concept: find a problem, then drill down closer to the root of the issue by asking “why” five times. It’s akin to peeling back the layers of an onion; each “why” gets you a little closer to the root cause of the issue.

Here’s an example:

Problem: “Our sales team isn’t hitting their numbers.”

  1. Why? Because they’re wasting a lot of time chasing bad leads.
  2. Why? Because we have no way of knowing which sales leads are good or bad.
  3. Why? Because in our CRM, the leads have invalid data or, worse, are duplicates.
  4. Why? Because we don’t have a formal policy in place to enforce accuracy or consistency.
  5. Why? Because it’s time consuming and tedious to manually police data entries.

And here is the problem with fixing our data quality problems: the results aren’t immediately quantifiable, so it’s easy to just brush problems under the rug. “Someone else will deal with it.” We often find ourselves leaving bad data alone because fixing it is cumbersome, time consuming, and error prone.

This is a mistake. A DiscoverOrg study found that sales and marketing departments lost approximately 550 hours, and as much as $32,000 per sales rep, from using bad data. A Forrester study showed that 70% of marketers believe they have poor quality or inconsistent customer data. Salesforce recently found that CRM systems contain about 15% duplicates among sales and service records. Seeing a pattern here?

To understand how to address problems of data quality, we should first understand its principles. According to Forrester, data quality hinges on these three areas:

  1. Accurate: is my data error-free?
  2. Complete: does my data provide a 360-degree customer view?
  3. Consistent: is my data consistent across platforms?

Let’s go back to our problem with bad sales leads; if we can find ways to ensure the accuracy of the data going into our systems, we can prevent our sales team from wasting their time. If we can at the same time ensure that our data remains consistent by defining a schema to automatically manage datasets from various systems, we’re ahead of the game.

Luckily, there’s a way to address your data quality issues without wasting a bunch of time. Here’s where Wrangle can help. We’re building a platform to help you both fix your existing bad data and ensure that your incoming data stays clean in the future.

We’re here specifically to help you with these two things:

  1. Making sure that your existing bad data gets cleaned.
  2. Helping you maintain and monitor your data cleanliness in the future.

Data stewardship doesn’t have to be hard. In fact, we’re hoping we can even make it a bit fun while you’re at it!

It’s our mission to find ways to make managing data easy for everyone, without requiring you to spend hours or days in Excel. We understand that it’s important to solve the right problems and get at the root causes, not just the surface issues. More often than not, these root causes boil down to bad data. With clean, high quality data, we can help improve the overall accuracy and efficiency of your business.