this post was submitted on 01 Dec 2024
58 points (100.0% liked)


No surprise I use Python, but I've recently started experimenting with polars instead of pandas. I've enjoyed it so far, but I'm not sure if the benefits for my team's work will be enough to outweigh the cost of migrating our existing pandas/numpy code over to polars.
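For anyone curious what the switch looks like, here's a minimal sketch of the same aggregation in both libraries (toy data, made-up column names, recent polars):

```python
import pandas as pd
import polars as pl

pdf = pd.DataFrame({"team": ["a", "a", "b"], "score": [1.0, 2.0, 3.0]})

# pandas: mean score per team
pdf.groupby("team", as_index=False)["score"].mean()

# polars (eager API): the same aggregation
pl.from_pandas(pdf).group_by("team").agg(pl.col("score").mean())
```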

I've also started playing with Grafana as a quick dashboarding utility to build some basic visualizations on top of some live production databases.

[–] ptz@dubvee.org 15 points 3 weeks ago (3 children)

I'm not a data scientist but I support a handful. They all use Python for the most part, but a few of them (still?) use R. Then there's the small group that just throws everything into Excel πŸ€·πŸ»β€β™‚οΈ

[–] Sigma_@lemmy.world 4 points 3 weeks ago

The dplyr pipeline and ggplot tooling are unmatched. Often I mix Python and R to use each where it's strongest.

[–] jubilationtcornpone@sh.itjust.works 4 points 3 weeks ago (1 children)

> Then there's the small group that just throws everything into Excel

Interesting. Excel is certainly capable enough but I would think data set size limitations would be a frequent issue. Maybe not as frequent as I would have thought though.

Excel kinda chugs when you go over 20MB of data, but once the file is open it works. Sometimes you just need to be patient.

[–] blackbirdbiryani@lemmy.world 3 points 3 weeks ago

R and the tidyverse are really amazing; the syntax is so natural that I rarely need to check the docs to quickly do basic data transformation/plotting. Definitely more intuitive than pandas (and I learnt that first).

R is my go-to, since that's what my uni taught me (Utrecht University). But I've been learning pandas in Python on the side for the versatility (and my CV).

[–] tiddy@sh.itjust.works 6 points 3 weeks ago (2 children)

I've had surprising luck with Godot for basic things, complementing it with Rust or OpenGL for higher performance.

[–] rutrum@lm.paradisus.day 10 points 3 weeks ago (1 children)

How do you use Godot for data science?

[–] tiddy@sh.itjust.works 6 points 3 weeks ago* (last edited 3 weeks ago)

Mostly for visualisations, but having a standardised reference for 2d and 3d transforms has come in handy too.

Admittedly, visuals aside, Rust does most of the mathematical heavy lifting.

Edit to note I'm not employed in data science, so I have a lot more wiggle room for things to go wrong

[–] agelord@lemmy.world 7 points 3 weeks ago (1 children)

Could you please elaborate further on how you're using Godot in data science?

[–] tiddy@sh.itjust.works 2 points 3 weeks ago

Probably should have elaborated more in the original comment, but essentially I'm not a professional, so the freedom of creating custom UI plus having some standard data structures like 2D and 3D transforms is worth it.

It also has a Python-esque language, a good built-in IDE, documentation, generic GPU access, and, most importantly for me, it's extremely cross-platform.

Mostly visualisations though, with Rust doing the actual legwork.

[–] magic_lobster_party@fedia.io 4 points 3 weeks ago

Java with Spark.

Although I feel like I'm doing less data science and more data processing.

[–] driving_crooner@lemmy.eco.br 4 points 3 weeks ago (1 children)

Not a data scientist, but an actuary. I use Python and pandas in Jupyter notebooks (VS Code). I think it would be cool to use polars, but my datasets aren't big enough to justify the move.

[–] rutrum@lm.paradisus.day 0 points 2 weeks ago

If it works, don't fix it!

[–] SplashJackson@lemmy.ca 3 points 2 weeks ago (1 children)

I like pandas but sometimes figuring out the simplest of shit is so complicated

[–] rutrum@lm.paradisus.day 1 points 2 weeks ago (1 children)

I learned SQL before pandas. It's still tabular data, but the mechanisms to mutate/modify/filter the data follow different methodologies. It took a long time to get comfy with pandas. It wasn't until I understood that the way you interact with a database table and the way you interact with a dataframe are very different that I started to finally get a grasp on pandas.
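As a made-up illustration of that mental shift, here's the same query in both worlds (toy data, hypothetical column names):

```python
import pandas as pd

df = pd.DataFrame({"dept": ["a", "a", "b"], "salary": [10, 20, 30]})

# SQL: SELECT dept, AVG(salary) AS avg_salary
#      FROM df WHERE salary > 15 GROUP BY dept
out = (
    df[df["salary"] > 15]                  # WHERE -> boolean mask
    .groupby("dept", as_index=False)       # GROUP BY -> groupby object
    .agg(avg_salary=("salary", "mean"))    # AVG -> named aggregation
)
```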

[–] Xraygoggles@lemmy.world 1 points 2 weeks ago (1 children)

Wow, I feel seen. Currently fighting this battle, any tips or resources you found helpful?

I think it's the index(?), aggregation, and order of operations I'm struggling with the most.

[–] rutrum@lm.paradisus.day 1 points 2 weeks ago (1 children)

First off, understanding the different data structures at a high level is mandatory. I would learn what the differences between a DataFrame, a Series, and an Index are. Further, learn how numpy's ndarrays play a role.

From there, unfortunately, I had to learn by doing... or rather struggling. It was one question at a time to Stack Overflow, like "how to filter on a column in pandas". Maybe in the modern era of LLMs, this part might be easier. And eventually, I learned some patterns and internalized the data structures.
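For the record, that particular Stack Overflow question usually resolves to one of a few equivalent spellings (sketch with a toy frame):

```python
import pandas as pd

df = pd.DataFrame({"col": [1, 5, 10]})

df[df["col"] > 3]        # boolean mask
df.query("col > 3")      # query string
df.loc[df["col"] > 3]    # explicit label-based indexer
```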

[–] Xraygoggles@lemmy.world 1 points 2 weeks ago

Thank you for taking the time, the perspective is helpful. Same answer as everything else then.

Just be more stubborn than the problem. =)

[–] thr0w4w4y2@sh.itjust.works 2 points 3 weeks ago

Jupyter notebooks, or if you're super trendy, give zerve.ai a try

[–] odium@programming.dev 2 points 3 weeks ago

Data engineer, not scientist, here. Mostly Python and PySpark for me.

[–] verdeviento@mander.xyz 2 points 3 weeks ago (1 children)

What do you enjoy/find beneficial about polars?

[–] rutrum@lm.paradisus.day 5 points 3 weeks ago (1 children)

It's a paradigm shift from pandas. In polars, you define a pipeline, or a set of instructions, to perform on a dataframe, and only execute them all at once at the end of your transformation. In other words, it's lazy. Pandas is eager, meaning every part of the transformation happens sequentially and in isolation. Polars also has an eager API, but you likely want the lazy API in a production script.
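A minimal sketch of that lazy style (file and column names made up):

```python
import polars as pl

lf = (
    pl.scan_csv("sales.csv")               # nothing is read yet
      .filter(pl.col("region") == "EU")
      .group_by("product")
      .agg(pl.col("revenue").sum())
)
df = lf.collect()  # the whole pipeline executes once, here
```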

Because it's lazy, polars performs query optimization, like a database does with a SQL query. At the end of the day, if you're using polars for data engineering or in a pipeline, it'll likely run much faster and use memory more efficiently. Polars also executes operations in parallel.
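You can even inspect the optimized plan before running anything, much like SQL's EXPLAIN (hypothetical file again):

```python
import polars as pl

lf = pl.scan_csv("sales.csv").filter(pl.col("qty") > 0).select("sku", "qty")
print(lf.explain())  # prints the optimized query plan without executing it
```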

[–] Kache@lemm.ee 1 points 3 weeks ago (1 children)

What kind of query optimization can it do for scanning data that's already in memory?

[–] rutrum@lm.paradisus.day 6 points 2 weeks ago (1 children)

A big feature of polars is only loading applicable data from disk. But during exploratory data analysis (EDA) you often have the whole dataset in memory. In that case, filters won't help much. Polars has a good page in their docs about all the possible optimizations it is capable of: https://docs.pola.rs/user-guide/lazy/optimizations/
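For the on-disk case, a sketch of what that looks like (hypothetical parquet file; the filter is pushed down, so polars can skip non-matching data while reading):

```python
import polars as pl

lf = pl.scan_parquet("events.parquet").filter(pl.col("year") == 2024)
df = lf.collect()  # only rows satisfying the predicate are materialized
```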

One I see off the top is projection pushdown, which only selects relevant columns for the final transformation. In pandas, if you perform a group by with aggregation, then only look at a few columns, you still perform the aggregation across all the data. In polars' lazy API, you define the entire process upfront, so it knows not to aggregate certain columns, for instance.
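A made-up sketch of that projection pushdown (file and column names hypothetical):

```python
import polars as pl

plan = (
    pl.scan_parquet("events.parquet")
      .group_by("user_id")
      .agg(pl.all().sum())
      .select("user_id", "clicks")   # downstream only needs these two
)
df = plan.collect()  # optimizer avoids reading/aggregating other columns
```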

[–] Kache@lemm.ee 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Hm, that's kind of interesting

But my first reaction is that optimizations only at the "Python processing level" are going to be pretty limited since it's not going to have metadata/statistics, and it'd depend heavily on the source data layout, e.g. CSV vs parquet

[–] rutrum@lm.paradisus.day 1 points 2 weeks ago

You are correct. For some data sources like parquet, it includes some metadata that helps with this, but it's not as robust as databases, I don't think. And of course, CSVs have no metadata (I guess a header row).

The actual specification for how to efficiently store tabular data in memory in a way that also permits quick execution of filtering, pivoting, i.e. all the transformations you need... is called Apache Arrow. It is the backend of polars and is also a non-default backend of pandas. I'm unfamiliar with the complexity of the format.
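For example, a sketch of opting into the Arrow backend on the pandas side (pandas >= 2.0; file name made up):

```python
import pandas as pd
import polars as pl

# pandas: Arrow-backed dtypes instead of the default NumPy ones
pdf = pd.read_csv("data.csv", dtype_backend="pyarrow")
print(pdf.dtypes)  # e.g. int64[pyarrow]

# polars is Arrow-native, so converting to an Arrow table is cheap
tbl = pl.read_csv("data.csv").to_arrow()
```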

[–] milicent_bystandr@lemm.ee 2 points 2 weeks ago

I only dabble, but I really like Julia. It has several language and architecture features I prefer over Python. The libraries also look like they've gotten really good since I last used it much.

[–] AA5B@lemmy.world 1 points 2 weeks ago

Anyone have any good pointers to DevOps resources or strategies? My data scientists keep stating that they need different approaches to CI/CD, but never seem to have actual requirements other than wanting to do things differently. I'd really like to offer them an easy way to get what they need while also complying with company policy and industry best practices, but their workflow doesn't seem to have any real differences.