this post was submitted on 01 Dec 2024
58 points (100.0% liked)

Programming


No surprise I use Python, but I've recently started experimenting with polars instead of pandas. I've enjoyed it so far, but I'm not sure if the benefits for my team's work will be enough to outweigh the cost of migrating our existing pandas/numpy code over to polars.

I've also started playing with Grafana as a quick dashboarding utility to build some basic visualizations on top of live production databases.

you are viewing a single comment's thread
[–] Kache@lemm.ee 1 points 3 weeks ago (1 children)

What kind of query optimization can it do for scanning data that's already in memory?

[–] rutrum@lm.paradisus.day 6 points 3 weeks ago (1 children)

A big feature of polars is loading only the applicable data from disk. But during exploratory data analysis (EDA) you often already have the whole dataset in memory, so filters at scan time won't help much there. Polars has a good page in its docs about all the optimizations it's capable of: https://docs.pola.rs/user-guide/lazy/optimizations/
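To make the on-disk case concrete, here's a minimal sketch (the file and column names are made up) of the lazy scan pattern those docs describe: nothing is read until `.collect()`, so the filter can be pushed down into the Parquet scan itself.

```python
import polars as pl

# Build a query plan only; no data is read at this point.
lazy = (
    pl.scan_parquet("events.parquet")         # hypothetical file
      .filter(pl.col("country") == "US")      # predicate pushdown: applied during the scan
)

# Only the matching rows are materialized into memory here.
df = lazy.collect()
```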

One I see off the top is projection pushdown, which selects only the columns relevant to the final transformation. In pandas, if you perform a group-by with aggregation and then only look at a few columns, you still perform the aggregation across all the data. In polars' lazy API, you define the entire process upfront, so it knows not to read or aggregate the columns you never use, for instance. A rough sketch of that is below.
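Here's what that might look like with the lazy API (column and file names are invented); only the columns the query actually touches get scanned and aggregated:

```python
import polars as pl

# Hypothetical wide CSV; only "region" and "revenue" are actually needed.
query = (
    pl.scan_csv("sales.csv")
      .group_by("region")
      .agg(pl.col("revenue").sum().alias("total_revenue"))
)

# print(query.explain())  # the optimized plan shows the scan reduced to those two columns
result = query.collect()
```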

[–] Kache@lemm.ee 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Hm, that's kind of interesting

But my first reaction is that optimizations only at the "Python processing level" are going to be pretty limited, since it's not going to have metadata/statistics, and it'd depend heavily on the source data layout, e.g. CSV vs Parquet.

[–] rutrum@lm.paradisus.day 1 points 2 weeks ago

You are correct. Some data sources like Parquet include metadata that helps with this, but it's not as robust as what databases have, I don't think. And of course, CSVs have no metadata (aside from a header row, I guess).
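For what it's worth, the Parquet footer does carry per-row-group column statistics (min/max, null counts) that readers can use to skip data. A quick way to peek at them with pyarrow (file name is made up, and the statistics may be absent if the writer didn't record them):

```python
import pyarrow.parquet as pq

meta = pq.ParquetFile("events.parquet").metadata   # hypothetical file
stats = meta.row_group(0).column(0).statistics     # stats for one column chunk, may be None
print(stats.min, stats.max, stats.null_count)
```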

The actual specification for how to efficiently store tabular data in memory, in a way that also permits quick execution of filtering, pivoting, i.e. all the transformations you need, is called Apache Arrow. It is the backend of polars and is also a non-default backend for pandas. I'm not familiar with the details of the format itself.
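A small sketch of that shared Arrow layer in practice (assumes pandas >= 2.0 with pyarrow installed):

```python
import pandas as pd
import polars as pl

pl_df = pl.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})

# Polars data is already Arrow under the hood, so exporting is cheap.
arrow_table = pl_df.to_arrow()

# Pandas can opt into Arrow-backed dtypes instead of its default NumPy blocks.
pd_df = pd.DataFrame({"x": [1, 2, 3]}).convert_dtypes(dtype_backend="pyarrow")
```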