Comments (5)
It depends on what you mean by more powerful - it does not support repeatable data generation and uses the basic Python random number generator without seeding.
It does provide a lot of support for pre-canned formats, so that aspect is interesting. But we would need to override the default random number generator, and it would be hard to incorporate vectorized operation.
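Repeatability is achievable when the generator is seeded explicitly. A dependency-free sketch of the idea, using the stdlib `random` module (Faker exposes a comparable hook via `Faker.seed()` / `seed_instance()`; the word list and function name here are illustrative only):

```python
import random

# Illustrative word list; any vocabulary would do
WORDS = ['danish', 'cheesecake', 'sugar', 'lollipop', 'wafer']

def generate_partition(partition_id, n, base_seed=42):
    # Derive a distinct but repeatable seed per partition, so reruns
    # reproduce the same data and partitions yield different streams
    rng = random.Random(base_seed * 100003 + partition_id)
    return [rng.choice(WORDS) for _ in range(n)]

# Re-running with the same seed reproduces the data exactly
assert generate_partition(0, 5) == generate_partition(0, 5)
```

This is the property the comment says plain Faker lacks out of the box: without explicit seeding, each run produces different data.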
It's not just the text generators that can be used to generate text data in the existing dbldatagen implementation - all of Spark SQL, PySpark, and pandas/NumPy/SciPy are available; that's what is leveraged by being fully integrated with PySpark.
from dbldatagen.
A more sensible approach might be to offer it as a possible integration in the future, in a similar way to how Factory Boy uses it, rather than as a replacement for the existing mechanism. So it would become an additional text generator rather than replacing the existing ones.
This would have some limitations, such as only generating uniformly distributed values (due to the mechanics of Faker's random number generator) and only supporting string columns.
I've confirmed that this is at least feasible for non-repeatable data (using Pandas UDF integration in conjunction with dbldatagen) - but performance is up to 100x slower, more realistically 20x slower for smaller data sets.
So I would suggest this as a possible documentation example rather than a built-in feature.
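For reference, the batch-at-a-time shape a Pandas UDF gives you is roughly the following. This is a dependency-light sketch: Faker is replaced with a trivial formatter so it runs standalone, `fake_names` is an illustrative name, and in a real Spark job the function would be wrapped with `pyspark.sql.functions.pandas_udf`:

```python
import pandas as pd

# The function receives a batch of base values as a pandas Series and
# returns a Series of generated strings, one per input row.
def fake_names(base: pd.Series) -> pd.Series:
    # Per-batch setup (e.g. constructing a Faker instance) would go here,
    # amortizing its cost across all rows in the batch.
    return base.map(lambda v: f"name_{v}")

batch = pd.Series([101, 102, 103])
assert fake_names(batch).tolist() == ["name_101", "name_102", "name_103"]
```

Even with per-batch amortization, each row still pays for a Python-level Faker call, which is consistent with the slowdown reported above.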
Update - I was able to generate 10 million rows of data in about 1.5 minutes with a 12 x 8 core cluster. For comparison, dbldatagen can generate 1 billion rows of data with basic formatting AND write them out to a Delta table on Azure in 1.5 minutes on the same cluster. Performance for complex formatting in dbldatagen can be slower (10-15 minutes for 1 billion rows in some cases).
Trying to do the same for 1 billion rows with parallelized Faker failed after 18 minutes, with the job only partially completed (about 1/3 of the way through).
For 100 million rows, I was able to generate Faker data using an extension mechanism in dbldatagen on a 12 x 8 core cluster and write it out in 5 minutes. So I think we can show an example of using Faker in conjunction with dbldatagen, but it does not make sense as the default mechanism.
But the current approach generates data in a Pandas UDF, not in Spark. So perhaps setting a random seed for Faker would achieve the same goal? Faker should work with custom distributions.
Well, I think it makes the most sense to have it as a plugin. Where is the performance bottleneck? Maybe going down to the Faker provider APIs would help?
Pandas UDFs are only used for text generation from templates and Lorem Ipsum text.
A Pandas UDF is still distributed across Spark nodes.
Aside from that, I think having a generic plugin that can support Faker but also other libraries is useful. It won't be bound specifically to Faker, and we don't want to ship Faker, have a dependency on Faker, test Faker, or require it to be preinstalled.
This mechanism would also allow use of arbitrary Python functions.
Here is how I see the syntax working:
import dbldatagen as dg
from faker import Faker
from faker.providers import internet

shuffle_partitions_requested = 12 * 4
partitions_requested = 96 * 5
data_rows = 1 * 1000 * 1000

spark.conf.set("spark.sql.shuffle.partitions", shuffle_partitions_requested)

my_word_list = [
    'danish', 'cheesecake', 'sugar',
    'Lollipop', 'wafer', 'Gummies',
    'sesame', 'Jelly', 'beans',
    'pie', 'bar', 'Ice', 'oat'
]

# The context is shared information used across generation of many rows.
# Here, it's the Faker instance, but it could include customer lookup data,
# custom number generators, etc.
# As it's a Python object, you can store anything within the bounds of what's
# reasonable for a Python object.
# It also gets around the issue of using objects from third-party libraries
# that don't support pickling.
def initFaker(context):
    context.faker = Faker()
    context.faker.add_provider(internet)

# The data generation functions are lambdas or Python functions taking a
# context and the base value of the column; they return the generated value.
ip_address_generator = (lambda context, v: context.faker.ipv4_private())
name_generator = (lambda context, v: context.faker.name())
text_generator = (lambda context, v: context.faker.sentence(ext_word_list=my_word_list))
cc_generator = (lambda context, v: context.faker.credit_card_number())
email_generator = (lambda context, v: context.faker.ascii_company_email())

# Example uses Faker text generation alongside standard text generation
fakerDataspec = (dg.DataGenerator(spark, rows=data_rows, partitions=partitions_requested)
                 .withColumn("name", percent_nulls=1.0, text=PyfuncText(name_generator, initFn=initFaker))
                 .withColumn("name2", percent_nulls=1.0, template=r'\\w \\w|\\w a. \\w')
                 .withColumn("payment_instrument", text=PyfuncText(cc_generator, initFn=initFaker))
                 .withColumn("email", text=PyfuncText(email_generator, initFn=initFaker))
                 .withColumn("ip_address", text=PyfuncText(ip_address_generator, initFn=initFaker))
                 .withColumn("faker_text", text=PyfuncText(text_generator, initFn=initFaker))
                 .withColumn("il_text", text=dg.ILText(words=(1, 8), extendedWordList=my_word_list))
                 )

dfFakerOnly = fakerDataspec.build()

display(dfFakerOnly)
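A minimal, illustrative sketch of how a PyfuncText-style wrapper with a lazily initialized shared context could work (the names PyfuncText and initFn follow the proposed syntax above; this is not the shipped implementation):

```python
import types

class PyfuncText:
    """Wraps an arbitrary Python function plus an optional one-time
    init function that populates a shared context object."""
    def __init__(self, fn, initFn=None):
        self.fn = fn
        self.initFn = initFn
        self._context = None

    def __call__(self, base_value):
        # Lazily build the context on first use (e.g. once per worker),
        # so unpicklable objects like a Faker instance are created locally
        if self._context is None:
            self._context = types.SimpleNamespace()
            if self.initFn is not None:
                self.initFn(self._context)
        return self.fn(self._context, base_value)

# Usage with a plain Python function instead of Faker
def init_ctx(context):
    context.prefix = "user"

gen = PyfuncText(lambda ctx, v: f"{ctx.prefix}_{v}", initFn=init_ctx)
assert gen(7) == "user_7"
```

Deferring context construction until the first call is what lets the wrapper itself be serialized to workers even when the objects it creates (such as a Faker instance) do not support pickling.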