Introduction to DataExplorer

Boxuan Cui

This document introduces the package DataExplorer, and shows how it can help you with different tasks throughout your data exploration process.

There are 3 main goals for DataExplorer:

1. Exploratory Data Analysis (EDA)
2. Feature Engineering
3. Data Reporting

The remainder of this guide is organized in accordance with these goals. As the package evolves, more content will be added.
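If you have not installed DataExplorer yet, it is available on CRAN and can be set up the usual way:

```r
install.packages("DataExplorer")
library(DataExplorer)
```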

Data

We will be using the nycflights13 datasets for this document. If you have not installed the package, please do the following:

```r
install.packages("nycflights13")
library(nycflights13)
```

There are 5 datasets in this package:

airlines

airports

flights

planes

weather

If you want to quickly visualize the structure of all of them, you may do the following:

```r
library(DataExplorer)
data_list <- list(airlines, airports, flights, planes, weather)
plot_str(data_list)
```

You may also try `plot_str(data_list, type = "r")` for a radial network.

Now let's merge all tables together for a more robust dataset for later sections:

```r
merge_airlines <- merge(flights, airlines, by = "carrier", all.x = TRUE)
merge_planes <- merge(merge_airlines, planes, by = "tailnum", all.x = TRUE, suffixes = c("_flights", "_planes"))
merge_airports_origin <- merge(merge_planes, airports, by.x = "origin", by.y = "faa", all.x = TRUE, suffixes = c("_carrier", "_origin"))
final_data <- merge(merge_airports_origin, airports, by.x = "dest", by.y = "faa", all.x = TRUE, suffixes = c("_origin", "_dest"))
```
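Every merge above is a left join (`all.x = TRUE`): rows of the left table are always kept, and lookup columns are filled with NA when there is no match. A minimal base-R sketch with toy data (hypothetical values, not from nycflights13) shows where the missing values examined in the next section come from:

```r
## Toy tables: one flight has a carrier code with no entry in the lookup table
flights_toy  <- data.frame(carrier = c("AA", "AA", "ZZ"), dep_delay = c(5, -2, 10))
airlines_toy <- data.frame(carrier = "AA", name = "American Airlines Inc.")

## Left join: all 3 flight rows survive; the unmatched "ZZ" row gets name = NA
merged <- merge(flights_toy, airlines_toy, by = "carrier", all.x = TRUE)
```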

Exploratory Data Analysis

Exploratory data analysis is the process of getting to know your data, so that you can generate and test your hypotheses. Visualization techniques are usually applied.

To get introduced to your newly created dataset:

```r
introduce(final_data)
```

```
rows                      336,776
columns                        42
discrete_columns               16
continuous_columns             26
all_missing_columns             0
total_missing_values      809,170
complete_rows                 906
total_observations     14,144,592
memory_usage           97,254,656
```

To visualize the table above (with some light analysis):

```r
plot_intro(final_data)
```

You should immediately notice some surprises:

0.3% complete rows: only 0.3% of all rows are free of missing values!

5.7% missing observations: despite the mere 0.3% complete rows, only 5.7% of all observations are missing.

Missing values are definitely creating problems. Let's take a look at the missing profiles.

Missing values

Real-world data is messy, and you can simply use the plot_missing function to visualize the missing profile for each feature:

```r
plot_missing(final_data)
```

From the chart, the speed variable is mostly missing and probably not informative. It looks like we have found the culprit for the 0.3% complete rows. Let's drop it:

```r
final_data <- drop_columns(final_data, "speed")
```

Note: You may store the missing data profile with `profile_missing(final_data)` for additional analysis.

Distributions

Bar Charts

To visualize frequency distributions for all discrete features:

```r
plot_bar(final_data)
```

```
## 5 columns ignored with more than 50 categories.
## dest: 105 categories
## tailnum: 4044 categories
## time_hour: 6936 categories
## model: 128 categories
## name: 102 categories
```

Upon closer inspection of the manufacturer variable, it is not hard to identify the following duplications:

AIRBUS and AIRBUS INDUSTRIE

CANADAIR and CANADAIR LTD

MCDONNELL DOUGLAS, MCDONNELL DOUGLAS AIRCRAFT CO and MCDONNELL DOUGLAS CORPORATION

Let's clean it up and look at the manufacturer distribution again:

```r
final_data[which(final_data$manufacturer == "AIRBUS INDUSTRIE"), ]$manufacturer <- "AIRBUS"
final_data[which(final_data$manufacturer == "CANADAIR LTD"), ]$manufacturer <- "CANADAIR"
final_data[which(final_data$manufacturer %in% c("MCDONNELL DOUGLAS AIRCRAFT CO", "MCDONNELL DOUGLAS CORPORATION")), ]$manufacturer <- "MCDONNELL DOUGLAS"

plot_bar(final_data$manufacturer)
```

Features dst_origin and tzone_origin contain only 1 value, so we should drop them:

```r
final_data <- drop_columns(final_data, c("dst_origin", "tzone_origin"))
```

Frequently, it is very beneficial to look at bivariate frequency distributions. For example, to look at discrete features by arr_delay:

```r
plot_bar(final_data, with = "arr_delay")
```

```
## 5 columns ignored with more than 50 categories.
## dest: 105 categories
## tailnum: 4044 categories
## time_hour: 6936 categories
## model: 128 categories
## name: 102 categories
```

The resulting distribution looks quite different from the regular frequency distribution.

Histograms

To visualize distributions for all continuous features:

```r
plot_histogram(final_data)
```

Immediately, you could observe that there are datetime features to be treated further, e.g., concatenating year, month and day to form a date, and/or adding hour and minute to form a datetime. For the purpose of this vignette, I will not go deep into the analytical tasks. However, we should treat the following features based on the output of the histograms:
Set flight to categorical, since it is the flight number with no mathematical meaning:

```r
final_data <- update_columns(final_data, "flight", as.factor)
```

Remove year_flights and tz_origin, since each contains only one value:

```r
final_data <- drop_columns(final_data, c("year_flights", "tz_origin"))
```

QQ Plot

A quantile-quantile plot is a way to visualize the deviation from a specific probability distribution. After analyzing these plots, it is often beneficial to apply a mathematical transformation (such as log) for models like linear regression. To do so, we can use the plot_qq function. By default, it compares with the normal distribution.

Note: The function will take a long time with many observations, so you may choose to specify an appropriate sampled_rows:

```r
qq_data <- final_data[, c("arr_delay", "air_time", "distance", "seats")]

plot_qq(qq_data, sampled_rows = 1000L)
```

From the chart, air_time, distance and seats seem skewed on both tails. Let's apply a simple log transformation and plot them again:

```r
log_qq_data <- update_columns(qq_data, 2:4, function(x) log(x + 1))

plot_qq(log_qq_data[, 2:4], sampled_rows = 1000L)
```

The distributions look better now! If necessary, you may also view the QQ plot by another feature:

```r
qq_data <- final_data[, c("name_origin", "arr_delay", "air_time", "distance", "seats")]

plot_qq(qq_data, by = "name_origin", sampled_rows = 1000L)
```

Correlation Analysis

To visualize the correlation heatmap for all non-missing features:

```r
plot_correlation(na.omit(final_data), maxcat = 5L)
```

```
## 11 features with more than 5 categories ignored!
## dest: 100 categories
## tailnum: 3246 categories
## carrier: 16 categories
## flight: 3773 categories
## time_hour: 6642 categories
## name_carrier: 16 categories
## manufacturer: 24 categories
## model: 121 categories
## engine: 6 categories
## name: 100 categories
## tzone_dest: 7 categories
```

You may also choose to visualize only discrete or only continuous features:

```r
plot_correlation(na.omit(final_data), type = "c")
plot_correlation(na.omit(final_data), type = "d")
```

Principal Component Analysis

While you can always run plot_prcomp(na.omit(final_data)) directly, PCA works better with cleaner data. To perform and visualize PCA on some selected features:

```r
pca_df <- na.omit(final_data[, c("origin", "dep_delay", "arr_delay", "air_time", "year_planes", "seats")])

plot_prcomp(pca_df, variance_cap = 0.9, nrow = 2L, ncol = 2L)
```

Slicing & dicing

Often, slicing and dicing the data in different ways can be crucial to your analysis and yield insights quickly.

Boxplots

Suppose you would like to build a model to predict arrival delays. You may visualize the distribution of all continuous features based on arrival delays with a boxplot:

```r
## Reduce data size for demo purpose
arr_delay_df <- final_data[, c("arr_delay", "month", "day", "hour", "minute", "dep_delay", "distance", "year_planes", "seats")]

## Call boxplot function
plot_boxplot(arr_delay_df, by = "arr_delay")
```

Among all the subtle changes in correlation with arrival delays, you could immediately spot that planes with 300+ seats tend to have much longer delays (16 ~ 21 hours). You may now drill down further to verify or generate more hypotheses.

Scatterplots

An alternative visualization is the scatterplot.
For example:

```r
arr_delay_df2 <- final_data[, c("arr_delay", "dep_time", "dep_delay", "arr_time", "air_time", "distance", "year_planes", "seats")]

plot_scatterplot(arr_delay_df2, by = "arr_delay", sampled_rows = 1000L)
```

Feature Engineering

Feature engineering is the process of creating new features from existing ones. Newly engineered features often generate valuable insights. For the functions in this section, data.table objects are the preferred input, since they will be updated by reference. Otherwise, an updated object will be returned, matching the class of the input.

Replace missing values

Missing values may carry meaning for a feature. Other than imputation methods, we may also set them to some logical values. For example, for discrete features, we may want to group missing values into a new category. For continuous features, we may want to set missing values to a known number based on existing knowledge. In DataExplorer, this can be done with set_missing. The function automatically matches the argument to either discrete or continuous features, i.e., if you specify a number, all missing continuous values will be set to that number; if you specify a string, all missing discrete values will be set to that string. If you supply both, both types will be set.
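As a minimal sketch of that type matching on toy data (hypothetical values; assuming DataExplorer is loaded):

```r
library(DataExplorer)

## Toy data: one missing continuous value and one missing discrete value
toy <- data.frame(
  num = c(1.5, NA, 3.0),
  cat = c("a", NA, "c"),
  stringsAsFactors = FALSE
)

## 0L fills the missing continuous cell; "unknown" fills the missing discrete cell
toy_filled <- set_missing(toy, list(0L, "unknown"))
```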
```r
## Return data.frame
final_df <- set_missing(final_data, list(0L, "unknown"))
```

```
## Column [dep_time]: Set 8255 missing values to 0
## Column [dep_delay]: Set 8255 missing values to 0
## Column [arr_time]: Set 8713 missing values to 0
## Column [arr_delay]: Set 9430 missing values to 0
## Column [air_time]: Set 9430 missing values to 0
## Column [year_planes]: Set 57912 missing values to 0
## Column [engines]: Set 52606 missing values to 0
## Column [seats]: Set 52606 missing values to 0
## Column [lat_dest]: Set 7602 missing values to 0
## Column [lon_dest]: Set 7602 missing values to 0
## Column [alt_dest]: Set 7602 missing values to 0
## Column [tz_dest]: Set 7602 missing values to 0
## Column [tailnum]: Set 2512 missing values to unknown
## Column [type]: Set 52606 missing values to unknown
## Column [manufacturer]: Set 52606 missing values to unknown
## Column [model]: Set 52606 missing values to unknown
## Column [engine]: Set 52606 missing values to unknown
## Column [name]: Set 7602 missing values to unknown
## Column [dst_dest]: Set 7602 missing values to unknown
## Column [tzone_dest]: Set 7602 missing values to unknown
```

```r
plot_missing(final_df)
```

```r
## Update data.table by reference
# library(data.table)
# final_dt <- data.table(final_data)
# set_missing(final_dt, list(0L, "unknown"))
# plot_missing(final_dt)
```

Group sparse categories

From the bar charts above, we observed a number of discrete features with sparse categorical distributions. Sometimes we want to group the low-frequency categories into a new bucket, or reduce the number of categories to a reasonable range; group_category will do the work. Take the manufacturer feature for example: suppose we want to group its long tail into another category.
We could try with the bottom 20% (by count) first:

```r
group_category(data = final_data, feature = "manufacturer", threshold = 0.2)
```

```
##   manufacturer   cnt       pct   cum_pct
## 1       AIRBUS 88193 0.2618744 0.2618744
## 2       BOEING 82912 0.2461933 0.5080677
## 3      EMBRAER 66068 0.1961779 0.7042456
```

As we can see, manufacturer will be shrunk down to 4 categories, i.e., AIRBUS, BOEING, EMBRAER and OTHER. If you like this threshold, you may specify update = TRUE to update the original dataset:

```r
final_df <- group_category(data = final_data, feature = "manufacturer", threshold = 0.2, update = TRUE)
plot_bar(final_df$manufacturer)
```

Instead of shrinking categories by frequency, you may also group the categories by another continuous metric. For example, if you want to bucket the carriers with the bottom 20% distance traveled, you may do the following:

```r
group_category(data = final_data, feature = "name_carrier", threshold = 0.2, measure = "distance")
```

```
##             name_carrier      cnt       pct   cum_pct
## 1  United Air Lines Inc. 89705524 0.2561422 0.2561422
## 2   Delta Air Lines Inc. 59507317 0.1699153 0.4260575
## 3        JetBlue Airways 58384137 0.1667082 0.5927657
## 4 American Airlines Inc. 43864584 0.1252495 0.7180152
```

Similarly, if you like it, you may add update = TRUE to update the original dataset:

```r
final_df <- group_category(data = final_data, feature = "name_carrier", threshold = 0.2, measure = "distance", update = TRUE)
plot_bar(final_df$name_carrier)
```

Dummify data (one hot encoding)

To transform the data into binary format (so that ML algorithms can pick it up), dummify will do the job. The function preserves the original data structure, so that only eligible discrete features will be turned into binary format:

```r
plot_str(
  list(
    "original" = final_data,
    "dummified" = dummify(final_data, maxcat = 5L)
  )
)
```

```
## 11 features with more than 5 categories ignored!
## dest: 105 categories
## tailnum: 4044 categories
## carrier: 16 categories
## flight: 3844 categories
## time_hour: 6936 categories
## name_carrier: 16 categories
## manufacturer: 32 categories
## model: 128 categories
## engine: 7 categories
## name: 102 categories
## tzone_dest: 8 categories
```

Note the maxcat argument: if a discrete feature has more categories than maxcat, it will not be dummified and will be returned untouched.

Drop features

After viewing the feature distributions, you often want to drop features that are insignificant. For example, a feature like dst_dest has mostly one value, so it doesn't provide any valuable information. You can use drop_columns to quickly drop features; the function takes either names or column indices:

```r
identical(
  drop_columns(final_data, c("dst_dest", "tzone_dest")),
  drop_columns(final_data, c(36, 37))
)
```

```
## [1] TRUE
```
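To close, here is a self-contained illustration of the maxcat behavior described in the dummify section above (toy data with hypothetical values; assuming DataExplorer is loaded and that dummified columns follow the feature_category naming):

```r
library(DataExplorer)

## Toy data: "color" has 3 categories, "id" has 4
toy <- data.frame(
  color = c("red", "blue", "red", "green"),
  id    = c("a", "b", "c", "d"),
  value = c(10, 20, 30, 40),
  stringsAsFactors = FALSE
)

## With maxcat = 3L, color is one-hot encoded into binary columns,
## while id (4 categories > maxcat) is returned untouched
dummified <- dummify(toy, maxcat = 3L)
names(dummified)
```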