What Has Static Typing Ever Done For Us, Anyway?
Bolting static type systems onto dynamic languages is all the rage, these days. TypeScript, of course, has long since taken the “what if JavaScript but not JavaScript?” crown. Even Ruby—the lucky-duckiest of loosey-goosey dynamic languages—is bending in that direction. The motivations are understandable: these languages began as scripting languages, but now they’ve become the primary languages for a vast amount of application development at scale. The anything-goes ethos of loosely-typed dynamic languages that hit the perfect cost/benefit balance for dropping a bit of “dynamic HTML” into a website twenty-five years ago starts to lose its luster after the first 100,000 lines of code or so.
Wherefore art thou static typing?
The benefits for development teams are straightforward, and compelling enough, on their own terms: statically-typed code is self-documenting (a huge boon for code completion and documentation tools) and static type checking eliminates a swath of low-hanging runtime bug fruit, along with a perceived need for a lot of low-value tests to guard against those failure modes. The strictures of the typing system can enforce or encourage more thoughtful top-down architecture and code design. Psychologically, embracing typing is very helpful for breaking out of an unhelpfully defensive mindset when writing code: what if someone passes a completely unexpected value to this function? Well, they just can’t, if that would entail a different type. Their problem.
The downsides are equally straightforward.
Typed languages are manifestly less expressive than untyped languages; what you lose in expressiveness, you gain in what you can prove about your code and its behavior. Clawing back some of that expressiveness makes the typing more complicated, both in theory and in practice. This is the problem we run into when we fall back into the duck typing mindset and “just want to do something simple” like map a function over an array of objects, plucking a property from each: now all of those things (“array”, “object”, “property”, “map”, “function”) fall within the type checker’s jurisdiction, and the type checker only likes what it can trivially prove to be correct, given what we’ve told it or what it can infer. Enter interfaces, protocols, traits, generics, sum types, type guards, and/or whatever other type system constructs a given language features for polymorphism.
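To make that concrete, here is a minimal sketch of the “pluck a property” scenario in TypeScript (the User type and the data are hypothetical, purely for illustration): the untyped version is a one-liner, while the typed version pulls in generics and keyof so the checker can prove the property actually exists on each element.

```typescript
// Untyped mindset: items.map(item => item[key]) and hope for the best.
// Typed version: the checker needs to know that `key` really is a key of T,
// and what type the plucked values will have.
function pluck<T, K extends keyof T>(items: T[], key: K): T[K][] {
  return items.map(item => item[key]);
}

// Hypothetical data, just for illustration:
interface User {
  id: number;
  name: string;
}

const users: User[] = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
];

const names = pluck(users, "name"); // inferred as string[]
// pluck(users, "email");           // compile-time error: "email" is not a key of User
```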
It’s not a slam-dunk psychologically, either. Static typing is a big mental shift. Not as big as, say, going from object-oriented programming to functional programming, but big. It’s natural to fight against the type system’s strictures, to struggle to regain the flexibility, expressiveness, and apparent simplicity of the underlying dynamic language. For every programmer who immediately embraces the guarantee that a well-typed function prevents arbitrary types from being passed as arguments, there’s another programmer who valiantly persists in writing verbose, overloaded functions to “safely” allow arbitrary types to be passed as arguments. This can actually result in more brittle code than the untyped equivalent, and instill a false sense of progress, to boot.
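A sketch of that defensive pattern, with hypothetical names throughout: instead of letting one precise signature narrow the domain, the function is overloaded to “accept anything,” and the sorting-out that the checker could have done statically gets re-implemented by hand at runtime.

```typescript
// Overloads that "safely" accept arbitrary types, recreating dynamic typing
// with extra ceremony. All names here are hypothetical.
function formatId(id: number): string;
function formatId(id: string): string;
function formatId(id: { id: number }): string;
function formatId(id: unknown): string {
  if (typeof id === "number") return `#${id}`;
  if (typeof id === "string") return `#${id.trim()}`;
  if (typeof id === "object" && id !== null && "id" in id) {
    return `#${(id as { id: number }).id}`;
  }
  // The runtime failure mode the type system was supposed to rule out:
  throw new Error("unsupported id");
}
```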
The biggest downside—since it can insidiously and silently undermine every benefit while exacerbating every cost—is that optional type systems, by their very nature, leak like a sieve. A key principle of type checking is that the entire code base is one unified type universe. If you can’t, in principle, trace a value and its type through every possible caller and code path, verifying that the types are compatible at each junction, then it’s not static type checking—that’s just type annotations. That’s why the simple map function turns out not to be so simple to type correctly: our function’s implementation may be isolated and encapsulated, but the types inside that function are not. Enter the any type, a way to drop back down to dynamic typing on an ad-hoc basis.
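A minimal sketch of how that escape hatch plays out (loadConfig and its fields are hypothetical): one any at a module boundary, and everything downstream type-checks whether or not it’s true.

```typescript
// One any at the boundary, and the checker takes everything downstream on faith.
function loadConfig(): any {
  return JSON.parse('{ "port": 8080 }');
}

const config = loadConfig();
const port: number = config.prot;    // typo: compiles fine, undefined at runtime
const timeout: string = config.port; // wrong type: also compiles fine
```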
An any type is the type system equivalent of the principle of explosion: it allows the type checker to prove anything, and thus prove nothing. Its use in one file might not automatically infect your entire code base, but if its use is widespread, your static typing will reduce to little more than hit-or-miss code completion. Untyped contagion is not an unmanageable risk, but I wonder how many teams manage it successfully, or even take it seriously in the first place. There are probably a lot of legacy internal JS libraries out there with .d.ts files chock full of any. Caveat typtor, one might say, but considering the resources and energy being sunk into adopting these systems, one might also hope for the actual end value proposition to be a bit less contingent.
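The shape of such a declaration file, sketched with made-up names: every signature nominally typed, nothing actually checked.

```typescript
// legacy-widgets.d.ts (hypothetical): the checker accepts any call to these,
// with any arguments, returning values it knows nothing about.
declare module "legacy-widgets" {
  export function render(target: any, options?: any): any;
  export function registerPlugin(name: any, plugin: any): void;
  export const defaults: any;
}
```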
Amusingly, if you went back to 2010 and told a conference of web developers that the consensus view in 2021 would be, basically, that JavaScript should have just been Java from the get-go, you’d be laughed off the stage. The situation we find ourselves in is highly path dependent. The way we know that JavaScript, if it were re-invented today, would be nowhere near as loosey-goosey with types is that people are constantly re-inventing JavaScript today, almost always with some amount of type safety as a core motivation. To be clear, I do see, over yonder, a Big Static Type Rock Candy Mountain coming into view. I’m just not convinced that the current muddle is a worthwhile step forward to that promised land.
I love static typing. It’s the best.
I do a lot of my “recreational” programming in Haskell, which is both intimidatingly strictly-typed, on one hand, and a hotbed of experimentation for type system expressiveness, on the other. I heartily recommend Haskell to programmers of any experience or skill level who want an in-your-face demonstration of both the power of type systems and their shortcomings, along with all the ways those shortcomings can be addressed via opt-in extensions (each, of course, with its own shortcomings). What it boils down to is that you can’t extend a type system so far that it becomes as expressive as the untyped (any) lambda calculus without losing all of the benefits of the type system. There is always a balance to be struck between what you want to express in your code and its types, and what the type checker can practicably prove. Striking that balance draws on some deep insights into the nature of programming, languages, and types.
Something else that programming with Haskell can highlight is how responsibility for correctness extends well beyond even the most expressive type system. Extremely robust and battle-tested standard libraries, such as Control.Monad, almost always come with caveats: an interface (called a “type class” in Haskell) implementation (“instance”) is valid if its functions satisfy not only the required type signatures, but also some other conditions outlined, informally, in the documentation. Often this means that the results of combining the functions in various ways should have some invariant property, such as associativity or distributivity. It is left up to the programmer to ensure that an implementation conforms. Rejoice, for this is a version of the full-employment theorem for programmers.
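TypeScript interfaces have the same gap, for what it’s worth. Here is a rough analogue (Semigroup and both instances are illustrative, not any particular library’s API): the signature is enforced, but the law lives only in a comment.

```typescript
// The checker enforces the shape; the law lives only in the comment.
interface Semigroup<T> {
  // Law (unchecked): combine(a, combine(b, c)) === combine(combine(a, b), c)
  combine(a: T, b: T): T;
}

const minSemigroup: Semigroup<number> = {
  combine: (a, b) => Math.min(a, b), // lawful: min is associative
};

const subtractSemigroup: Semigroup<number> = {
  combine: (a, b) => a - b, // type-checks, but violates associativity
};
```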
There’s also an applicable cultural lesson, in the existence of a function called unsafePerformIO. The short story there is: “Don’t use unsafePerformIO.” The longer story is: “No, seriously. Don’t use unsafePerformIO.” It’s the any of the Haskell world: the ultimate escape hatch from a miserable situation. It’s the only way to run an IO action without binding it to main (which itself is the only way for app code to safely run an IO action), because it is the only Haskell function that can actually perform side effects before returning. If you peruse the Haskell standard library, you will, in actual fact, see unsafePerformIO used a fair bit. This is because the usual “safe” use of IO—binding to main at the top-most level—is itself implemented atop unsafePerformIO. In other words, unsafePerformIO isn’t intrinsically unsafe, but the onus is on the programmer—the library programmer who writes the unsafePerformIO call, not the library user—to “convert” it to a safe use. This is, to put it mildly, a very high bar.
Haskell programmers are incredibly ornery about unsafePerformIO, and maybe three people in the world are allowed to use it without a mandatory psychiatric evaluation. any isn’t nearly as unsafe as unsafePerformIO, and in fact may be just the magic ticket needed to get code that should work dammit over the hump, and there’s the rub. You can say it’s not “best practice” all you want, but so long as the easiest, most surefire way to get serviceable code working and deployed is to turn off type checking and revert to dynamic typing, then you can take it to the bank that that is exactly what’s going to happen, a lot.
Of course, it’s absolutely possible to write entire libraries without ever opting out of type checking. That’s literally every library ever written in a real statically-typed language. I wouldn’t be surprised, at all, if the big TypeScript projects don’t have a single any anywhere in their source, or if they do, it’s surrounded by FIXME flags and links to outstanding TypeScript issues, and carefully isolated from the rest of the well-typed code. I also wouldn’t be surprised, at all, if there are plenty of small-to-medium-sized projects that have any sprinkled maybe not everywhere, but here and there… only as required to quickly fix a critical bug, or in dubious contributions pushed upstream from a major corporate user, perhaps. Corporate SWE orgs jumping on the TypeScript wagon en masse because a Senior Vice President read an article in BusinessWeek? Forget about it. For every perfectly any-free open source library there are, assuredly, scores if not hundreds of apps, deployed to production, that are absolutely drowning in a sea of any.
And, yet…
Static type systems can enable amazing, nigh-miraculous things. Stream fusion, for example, is a technique for automatically rewriting otherwise 100% natural and conventional list processing code (maps, filters, folds, zips, and so forth) to eliminate, in most cases, the need to allocate intermediate lists. The trivial example is a function that maps over a list of numbers and doubles their values, and a second function that maps over a list of numbers and increments their values by 1. In the naïve implementation, composing those two functions would allocate a list of doubled numbers, and then a final list of incremented numbers. What stream fusion can do, in so many words, is dissolve the individual loops and recompose the functions on the individual list elements, instead. A huge factor in being able to do such things correctly is static typing: being able to know, purely from the types of the functions, that compositions of those functions can be safely rearranged and optimized, transparently.
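For illustration only, here is the shape of that example in TypeScript terms. TypeScript will not perform this rewrite for you, which is rather the point, but the sketch shows what fusion is allowed to do once the types guarantee that each element is processed independently.

```typescript
// Two natural, conventional passes over the data...
const double = (xs: number[]): number[] => xs.map(x => x * 2);
const increment = (xs: number[]): number[] => xs.map(x => x + 1);

// ...composed naively, which allocates an intermediate array of doubled values:
const naive = (xs: number[]): number[] => increment(double(xs));

// What fusion effectively rewrites the composition into: a single pass, no
// intermediate array, with the per-element functions composed instead.
const fused = (xs: number[]): number[] => xs.map(x => x * 2 + 1);
```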
That’s just for starters. With a language like Haskell, you can prohibit imports of the IO type or unsafePerformIO, and have a high degree of confidence that the untrusted code you’re about to run can’t have much of an effect on the external world—barring a zero-day exploit—beyond putting itself into an infinite loop, or eating up memory unnecessarily. If that sounds unimpressive, then you probably haven’t tried to do anything remotely equivalent in vanilla JavaScript. Team up with the most talented JavaScript programmer you know, and try to block him/her from, say, getting access to fetch, heuristically. Even if you can eventually swing it, you’ll never have the confidence that comes from all access to the external world being funneled through a single type.
Type systems are all about what the type checker can prove, and parlaying that proof into real, concrete benefits. Merely implying or not excluding the possibility of type correctness is, frankly, worse than no type checking at all, since it can seriously lead a team astray, induce significant over-confidence, encourage dubious development practices bordering on cargo-culting, and be very expensive in time, money, and personnel. By all means, if you’re writing a well-isolated, robustly-engineered library from scratch, and have the wherewithal to rule out falling back to dynamic typing here-and-there, when the going gets tough, then don’t let me stop you. If you have a pre-existing and sprawling code and dependency base of mission-critical vanilla JavaScript, and a large team of engineers to match… why on Earth would you want to tee up this particular moon-shot? Best case scenario, you would be making the perfect the enemy of the good… and, in my experience, at the end of the day the perfect never actually turns out to be much of a challenge to the good enough, when it comes to developing and releasing software.
You might say that I’m the one making the perfect the enemy of the good. What’s wrong with opt-in type checking, for those who want to use it, or for the code that’s easiest to adapt? Even a marginal benefit is a benefit, no? Sometimes it is a benefit; sometimes, though, it’s just a wrecking ball to your stack, your overall code quality, and your engineering processes. Static type checking that doesn’t actually give you high confidence in its correctness—and a low chance of high confidence is actually just all-around low confidence—isn’t a benefit of any sort… it’s just code bureaucracy.