Making Python faster won’t be easy, but it’ll be worth it

Wednesday April 2, 2025, 11:00 AM, from InfoWorld
Want to make a Python user grind their teeth? Just recite three words: Python is slow.

In many of the ways that matter, it’s true. “Pure” Python, without external libraries written in C, is nowhere near as fast at computation or object manipulation as C or C++, or Java, or Rust, or Go, or … well, the list goes on.

Python users have long addressed this problem by performing end runs around it. Want faster math? Use a math library like NumPy or Numba, or compile your code to C with Cython. These external solutions do get the job done, and a whole subculture of Python revolves around using them well.
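
To make the trade-off concrete, here is a minimal sketch (assuming NumPy is installed; timings will vary by machine) of the kind of work that gets pushed out of the interpreter and into compiled C loops:

    import time
    import numpy as np

    values = list(range(1_000_000))
    array = np.arange(1_000_000, dtype=np.int64)

    start = time.perf_counter()
    total_py = sum(v * v for v in values)        # interpreted, object by object
    print("pure Python:", time.perf_counter() - start)

    start = time.perf_counter()
    total_np = int((array * array).sum())        # one call into compiled C loops
    print("NumPy:", time.perf_counter() - start)

    assert total_py == total_np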

But shortcuts like these always come at a cost. With NumPy, the price is writing code that’s more abstract and less granular than what could be expressed in other languages. With Numba (and in some cases Cython), it’s only having a small subset of the language to express what you want to do at full speed.
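
As a rough illustration of that restriction, here is a hedged sketch of Numba's nopython mode (assuming Numba and NumPy are installed; the function name is purely illustrative). The decorated loop compiles to machine code, but only because it sticks to the numeric subset Numba understands.

    import numpy as np
    from numba import njit

    @njit
    def dot(xs, ys):
        total = 0.0
        for i in range(len(xs)):
            total += xs[i] * ys[i]
        return total

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    print(dot(a, b))        # fast after the first call, which triggers compilation
    # dot(["a"], ["b"])     # arbitrary Python objects fall outside the supported subset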

Across the board, users clamor for the same thing: Can we make Python natively faster, right out of the box? And over the years, a simple answer has taken shape: “Maybe, but it’s hard.”

Why making Python faster isn’t easy

Python’s performance has little to do with being an interpreted language, as opposed to one compiled ahead of time to native machine code. The biggest obstacles to performance revolve around something baked into the language at its most fundamental level: its dynamism.

If you assign something a name in a language like C++ or Rust, assumptions about the language’s behavior provide strong guarantees about what type of thing you’re dealing with. The compiler can take advantage of that consistency and generate fast code to handle it.

In Python, there are no such guarantees—a named thing can be pretty much anything. Every single time the program does something with that named thing, the interpreter has to look it up and figure out what is possible. Many common optimizations simply can’t be done.
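
A toy example shows the problem: the same name, and the same + operator, can mean entirely different things from one line to the next, so the interpreter has to re-discover what to do every time.

    x = 3
    print(x + x)      # integer addition
    x = "3"
    print(x + x)      # string concatenation, same syntax, different machinery
    x = [3]
    print(x + x)      # list concatenation

A C++ or Rust compiler can pin down one meaning of + at build time; CPython has to work it out again at run time.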

Why Python type hints won’t save us

Python’s recently added type hinting system doesn’t really help here, either. It’s only intended to be used as an ahead-of-time linting tool, not a compile-time or runtime optimization tool. And that’s by design: The whole point of Python type hints is to allow developers to lint code ahead of time for correctness, while still preserving Python’s underlying dynamism.
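
A small sketch makes the point: the annotations below are visible to a checker such as mypy, but CPython itself ignores them at run time.

    def double(n: int) -> int:
        return n * 2

    print(double(21))      # 42, as intended
    print(double("ha"))    # "haha" -- the hints impose no runtime checks, so this runs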

So why not create dialects of Python that use type hints to generate optimized code? Well, such things do exist: Cython is one of the most common examples. But dialects only yield sizable speedups when dealing with pure machine types. The minute you start using Python’s object-based types, like lists or dictionaries, you’re forced to call into the CPython runtime to manage them, and then you’re back to Python’s conventional level of performance.
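
Here is a hedged sketch of what such a dialect looks like, using Cython’s pure-Python mode (this assumes Cython is installed; the function is illustrative, and when compiled, its loop runs on machine integers rather than Python objects):

    import cython

    def sum_squares(n: cython.int) -> cython.longlong:
        total: cython.longlong = 0
        i: cython.int
        for i in range(n):
            total += i * i
        return total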

And again, this approach misses the point. We shouldn’t have to go outside of Python to speed things up. So that raises the question: What can be done inside Python?

How Python is getting faster now

Over the last few versions of CPython, the reference implementation of Python, a slew of proposed changes—some near-term, some further out—have started landing. These proposals provide the first concrete hints of how Python’s developers plan to speed the language up from within.

The specializing adaptive interpreter tries to take advantage of the relative stability of object types in given regions of code. If a given operation reliably uses the same types, the general bytecodes that perform those operations can be swapped at runtime for type-specialized ones, thus avoiding some additional lookups.
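
On CPython 3.11 or later you can watch this happen with the dis module (the exact specialized instruction names vary by version, so treat the output as illustrative):

    import dis

    def add(a, b):
        return a + b

    for _ in range(1000):
        add(1, 2)                 # warm up with consistently typed int arguments

    dis.dis(add, adaptive=True)   # may show BINARY_OP_ADD_INT in place of the generic BINARY_OP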

One of Python’s big alternative implementations, PyPy, uses a just-in-time compiler (JIT) to speed up Python. PyPy generates machine-native code where it yields actual performance improvements, but this comes at the cost of it being an entirely separate implementation of Python, with all the maintenance overhead, bugs, compatibility issues, and so on that come with that. Native JIT techniques for CPython started landing recently, but they don’t yet yield significant performance boons—they’re meant to lay the foundation for future improvements over time.

Another major newly introduced change is an alternate build of CPython without the Global Interpreter Lock, or GIL. The GIL synchronizes activity across multiple threads, at the cost of ruling out true threaded parallelism. The GIL-less builds open up a world of multithreaded performance improvements, although they are still considered experimental.
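
A hedged sketch of the difference: on a standard GIL build, the two CPU-bound threads below take roughly as long as running them one after another, while on an experimental free-threaded build of CPython 3.13 or later they can genuinely run in parallel.

    import sys
    import time
    from concurrent.futures import ThreadPoolExecutor

    def burn(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    # sys._is_gil_enabled() exists on 3.13+; older builds always have the GIL
    print("GIL enabled:", getattr(sys, "_is_gil_enabled", lambda: True)())

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=2) as pool:
        list(pool.map(burn, [20_000_000, 20_000_000]))
    print("elapsed:", time.perf_counter() - start)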

These are just a few of the concepts being implemented, and many more are in the pipeline. What’s clear from this mosaic of techniques is that no one thing will magically unlock faster speeds in Python. It also shows that the path to making things faster is to figure out, whenever possible, how to make the interpreter do less—avoid type checks, for instance, or perform less reference counting.

Why faster Python must come from within Python

Before this recent wave of optimizations started landing, a common line of thought went like this: Instead of making Python itself faster, or even making a new Python runtime (PyPy), why not make a new language that’s highly compatible with Python, and gradually transition Python users over to that new language?

That’s more or less the approach taken by projects like Mojo. Mojo offers a Python-like syntax, and even a degree of backward compatibility with existing Python. But unlike Python, it compiles by default to native machine code.

Still, the language isn’t a full drop-in compatible replacement for Python, and where it falls back to the rest of the Python world for compatibility, it loses its performance edge. It will be an uphill battle for any language—Mojo or any other project like it—to achieve the full ecosystem compatibility, broad acceptance, and critical mass Python already has.

That means Python ought to become its own best replacement. This goal might only be possible to reach incrementally, one piece at a time. But that iterative process allows the existing Python community to migrate with the language, instead of starting over from scratch elsewhere. Some pieces (such as no-GIL Python) will be bigger than others (like type specialization). But what matters is that the innovations keep coming. Then it’s up to the larger Python community to leverage them and raise all boats.
https://www.infoworld.com/article/3855600/making-python-faster-wont-be-easy-but-itll-be-worth-it.htm
