Some thoughts on Redux

It would be hard to overstate the importance of Redux to the early explosion in popularity of React. The two quickly became nigh inseparable—like peas and carrots. Redux’s emphasis on immutability and the dead simple action-reducer pattern meshed seamlessly with React’s functional reactive programming and one-way data flow. To such an extent, I might even argue, that it became something of an obstacle to a proper conceptualization of React, the component tree, and front end application architecture in general. The two combined to craft a seductive pitch that it was most appropriate to view your app as one giant component function, fed from the top by one giant piece of immutable application state.

This simply isn’t true in practice, of course, and structuring your components to optimize re-renders against store changes will quickly result in many quasi-independent component trees, firmly shattering the mirage of a 1-to-1 relationship between the top-level state and UI rendering. Edge cases like ghost children—where a child component may re-render from a store update before the parent component, when it is in fact destined to be removed entirely after the parent has re-rendered—further muddle the conceptual symmetry between Redux and React.

Still, the abstraction was so beautiful that the combination of React+Redux was omnipresent until relatively recently, and of course it still has many adherents.

In retrospect, however, Redux kind of greased its own wheels a bit too much. The written-in-concrete ‘best practice’ of having a single store per app—which itself was instantiated from a static set of reducers known at build time—pretty much forced that symmetry, and on somewhat dubious grounds. After all, there’s no reason in principle why a single React app can’t be a combination of two completely orthogonal sub-applications, each with its own independent store. There are architectural patterns that would encourage exactly this, “micro-frontend” being just one.
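
As a rough sketch of what that could look like (the billing and messaging sub-applications here are hypothetical), nothing stops a single React tree from hosting two fully independent stores:

import React from 'react'
import { createStore, Reducer } from 'redux'
import { Provider } from 'react-redux'

// Hypothetical reducers and root components for two unrelated sub-applications.
declare const billingReducer: Reducer
declare const messagingReducer: Reducer
declare const BillingApp: React.FC
declare const MessagingApp: React.FC

const billingStore = createStore(billingReducer)
const messagingStore = createStore(messagingReducer)

// Two independent stores, two independent Providers, one React tree.
export const App = () => (
  <>
    <Provider store={billingStore}>
      <BillingApp />
    </Provider>
    <Provider store={messagingStore}>
      <MessagingApp />
    </Provider>
  </>
)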

An early confusion that this ‘best practice’ induced was the conflation of “one store” with a “single source of truth.” Early React/Redux developers will recall running battles over the correct way to integrate Redux with react-router. The resolution, in essence, was to disabuse ourselves of the falsehood that a single source of truth means having a single library interface: as long as the router was the source of truth for routing and Redux was the source of truth for general application state, there was no compelling reason to integrate or synchronize them. Yet another crack in the illusion of a 1-to-1 symmetry between the Redux store and the UI.

More recently, other state management libraries have largely abandoned the notion that one giant store is necessary or proper, and “single source of truth” is returning to its traditionally understood meaning of avoiding duplicative logic or state. This has its own dangers, and the decentralization of external state management these libraries enable could itself work against the SSOT principle. Still, the principle of orthogonal concerns tells us that we should be able to add a piece of shared state to our app without necessarily having to touch any existing code, and libraries like Valtio or Zustand allow exactly that.
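
A minimal sketch of what that looks like with Zustand (assuming a recent version; the store name and shape here are hypothetical): adding a new piece of shared state is just a new module, and nothing that already exists has to change.

import { create } from 'zustand'

interface NotificationsState {
  unread: number
  markAllRead: () => void
}

// A brand-new slice of shared state, defined in isolation.
export const useNotifications = create<NotificationsState>((set) => ({
  unread: 0,
  markAllRead: () => set({ unread: 0 }),
}))

// Any component can opt in, independently of whatever else the app uses:
// const unread = useNotifications((s) => s.unread)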

At the end of the day, the only really compelling Redux pillar that remains standing on rock-solid foundations is the supremacy of immutable data structures. This is what allows us to optimize our component renders, and ensures consistency across all the consumers of a given piece of state. It goes without saying that this is a guarantee that new state management libraries continue to rely on, and even make more robust—for instance via use of proxy facades for update tracking. (Redux reducers, on the other hand, are famously prone to bugs where an object or array is accidentally mutated in place.)
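
A minimal sketch of that failure mode, with a hypothetical todos slice:

type Todo = { id: number; text: string; done: boolean }
type TodosState = { todos: Todo[] }

// Buggy: push mutates the existing array and returns the same state object, so
// reference-equality checks see "no change" and subscribers may not re-render.
function todosReducerBuggy(state: TodosState, action: { type: 'add'; todo: Todo }): TodosState {
  state.todos.push(action.todo)
  return state
}

// Correct: build a new array and a new state object; the old state is left untouched.
function todosReducer(state: TodosState, action: { type: 'add'; todo: Todo }): TodosState {
  switch (action.type) {
    case 'add':
      return { ...state, todos: [...state.todos, action.todo] }
    default:
      return state
  }
}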

My top-five mental tricks

I’m in the middle of reading Reframe Your Brain, by Scott Adams, and it has inspired me to write up some of my own strategies for self-improvement and coping with life’s inefficiencies. Without further ado, here are five of my own “reframes” that have served me well, and could even be life-changing in the right circumstances.

Avoid temptations by slightly delaying them

How often do you have a craving you know you’ll immediately regret? Maybe something small, like Friday night Domino’s—but you’re down to your last $20 and you’ll eat it too quickly and feel sick, anyway. Maybe something bigger, like an expensive gaming PC you’ll barely touch after the first few days. It could even just be the next sugary snack from the kitchen cupboard.

Something that has worked very well for me is—firstly—not to judge or dwell on the craving. Instead, I set a time in my head when I am allowed to have whatever it is I’m craving. For the snack that might be the next half or quarter hour. Pizza delivery: an hour or so in the future. Major purchases might get put off until pay day.

I give myself permission without judgement because there’s a very good chance that the craving will have passed by the appointed time. An additional rule I might apply is that I have to start over with a new time if I miss the appointed time and the craving comes back. Often, I’ll not even notice and the craving will simply pass out of mind.

I’m not anxious, I’m excited

I forget where I learned this trick, except that it ultimately came from an actual psychology study. The key insight is that, physiologically, anxiety and excitement are the exact same thing. The study participants were tasked with an anxiety-inducing activity, such as public speaking. Before taking the stage, they clapped their hands and announced to themselves: “I am excited.” This simple mental shift had profound effects on their performances.

I have found this trick to be useful even for generalized anxiety. If I’m working on a project and start to feel anxious, instead of allowing it to negatively affect my work, I pause, clap, and tell myself that what I’m actually feeling is excitement. It can help to have some concrete reasons already in mind for why you should be excited, if needed.

I’ve even used this with some success to get over my anxiety of flying. For me, it’s mostly the take-off (and its lead-up) that causes anxiety, and I’m fine once we’re in the air. So, I tell myself that I’m not anxious—I’m excited to get to the good part of the flight. It’s always good enough to get me into the seat, at least.

The 5-year regret horizon

This one is very simple. Anything that happened more than five years ago simply did not happen. Missed opportunities, professional setbacks, friendship-ending arguments: if it was more than five years ago, it has now rolled off your emotional credit report.

The moment something pops into your head from college and you begin to ruminate, visualize a large STOP sign, remind yourself that it happened over 5 years ago, and refocus on something more recent. Odds are, whatever that is won’t get its hooks as deeply into you, and your mind will be free to move on to more productive thoughts.

The only goal is to start

One of my most counter-productive instincts ever has been to set benchmarks at the outset of trying to form a new habit. It feels good to have a schedule in mind (when getting started running/jogging, for instance) of so-many minutes or so-many miles a day. That schedule itself feels amazingly like progress, but it is the exact opposite!

Take a wild guess how many days in that first month I felt up to running for a solid 45 minutes before I had even laced up my sneakers. Funnily enough, it did not take very long at all before, once I got warmed up, you couldn’t have dragged me off the treadmill. The days that I actually got on the treadmill and then tapped out at 10 minutes were few and far between. There was zero correlation between the amount of motivation I felt beforehand and how much energy I had once I got started.

Again: all the schedule and my benchmarks did was make it less likely that I would even get on the treadmill at all. Once I got on the treadmill, the benchmarks had zero to do with keeping me on it. The schedule was 100% a drag on my new fitness habit.

What actually did help was literally anything that made it more likely that I would just get off my ass and start. Clean workout clothes set out with my running shoes. A full water bottle that was easy to drink from when mid-stride, waiting by my keys. A go-to playlist queued up on my phone.

Maximize for just getting started, minimize anything that gets in the way of getting started, and worry about the 45th minute after the 44th minute, and not a second before.

Effective fantasizing for better sleep

Like many people, I used to have an awful time getting to sleep at night. Many nights, my brain just would not shut down. I won’t elaborate further: this might be one of the few truly universal human experiences.

I stumbled upon the strategy that has worked for me quite by accident. Wish fulfilment fantasies, by themselves, weren’t enough to pierce the bubble of anxiety and rumination to calm my brain down. What did the trick was painstakingly fleshing out the details. The more pedestrian the details, the better. I call these exercises “sleep teasers.”

An early sleep teaser of mine was to imagine waking up in the sickbay of the starship Voyager, lost in the Delta Quadrant. This wasn’t just a Mary Sue author-insertion fanfic. Instead, I worked my way through what my first days on the ship might realistically entail: being assigned quarters, having to program the replicators with my favorite junk foods, figuring out how to become a useful and valuable new crew member with skills four centuries out-of-date. Absolutely zero space battles or warp core breaches.

The fantastical-but-mundane nature of sleep teasers seems to be important. Another one I’ve used often is to imagine that I’ve proved (or disproved) a famous open mathematical problem. I’m utterly unqualified ever to do such a thing, but I’ve read books on the subject and have a general mathematical education, so it becomes more a game of mathematical fiction: coming up with a means of proof that would sound plausible enough to your average hard science fiction enthusiast. Sometimes these confabulations have prompted questions or personal insights into the real-life mathematics involved, but that’s just a bonus, if it happens.

Playing the lottery can help, if you have some detail-prone but low-key dream that jackpot money alone could enable. What exactly would it take to set up an animal sanctuary? Or to move back to your home town and open a book store on the square? What’s important is a balance between the pleasantly unrealistic fantasy and the mental problem-solving to work through. On the Voyager, I still got to meet and interact with the characters. Having proved the Riemann Hypothesis, I got to bask in the fame and collect my just rewards.

Whichever sleep teaser I settle on any given night, I tend to quickly settle into a pattern, retracing many of the details of the same mental story. This familiarity and routine, I’ve found, leads inexorably to sleep.

What Has Static Typing Ever Done For Us, Anyway?

Bolting static type systems onto dynamic languages is all the rage, these days. TypeScript, of course, has long since taken the “what if JavaScript but not JavaScript?” crown. Even Ruby—the lucky-duckiest of loosey-goosey dynamic languages—is bending in that direction. The motivations are understandable: these languages began as scripting languages, but now they’ve become the primary languages for a vast amount of application development at scale. The anything-goes ethos of loosely-typed dynamic languages that hit the perfect cost/benefit balance for dropping a bit of “dynamic HTML” into a website twenty-five years ago starts to lose its luster after the first 100,000 lines of code or so.

Wherefore art thou static typing?

The benefits for development teams are straightforward, and compelling enough, on their own terms: statically-typed code is self-documenting (a huge boon for code completion and documentation tools) and static type checking eliminates a swath of low-hanging runtime bug fruit, along with a perceived need for a lot of low-value tests to guard against those failure modes. The strictures of the typing system can enforce or encourage more thoughtful top-down architecture and code design. Psychologically, embracing typing is very helpful for breaking out of an unhelpfully defensive mindset when writing code: what if someone passes a completely unexpected value to this function? Well, they just can’t, if that would entail a different type. Their problem.

The downsides are equally straightforward.

Typed languages are manifestly less expressive than untyped languages; what you lose in expressiveness, you gain in what you can prove about your code and its behavior. Clawing back some of that expressiveness makes the typing more complicated, both in theory and in practice. This is the problem we run into when we fall back into the duck typing mindset and “just want to do something simple” like map a function over an array of objects, plucking a property from each: now all of those things (“array”, “object”, “property”, “map”, “function”) fall within the type checker’s jurisdiction, and the type checker only likes what it can trivially prove to be correct, given what we’ve told it or what it can infer. Enter interfaces, protocols, traits, generics, sum types, type guards, and/or whatever other type system constructs a given language features for polymorphism.
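
In TypeScript terms, here is a sketch of what the “simple” pluck looks like once it is brought under the checker’s jurisdiction (one possible typing among several):

// The untyped habit: users.map(u => u.name), which is fine until someone passes the wrong shape.

// Typed: the element type and the property name become type parameters,
// and the result type is derived from both.
function pluck<T, K extends keyof T>(items: T[], key: K): T[K][] {
  return items.map((item) => item[key])
}

const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
]

const names = pluck(users, 'name')    // inferred as string[]
// const oops = pluck(users, 'email') // compile error: 'email' is not a key of the element type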

It’s not a slam-dunk psychologically, either. Static typing is a big mental shift. Not as big as, say, going from object-oriented programming to functional programming, but big. It’s natural to fight against the type system’s strictures, to struggle to regain the flexibility, expressiveness, and apparent simplicity of the underlying dynamic language. For every programmer who immediately embraces the guarantee that a well-typed function prevents arbitrary types from being passed as arguments, there’s another programmer who valiantly persists in writing verbose, overloaded functions to “safely” allow arbitrary types to be passed as arguments. This can actually result in more brittle code than the untyped equivalent, and instill a false sense of progress, to boot.

The biggest downside—since it can insidiously and silently undermine every benefit while exacerbating every cost—is that optional type systems, by their very nature, leak like a sieve. A key principle of type checking is that the entire code base is one unified type universe. If you can’t, in principle, trace a value and its type through every possible caller and code path, verifying that the types are compatible at each junction, then it’s not static type checking—that’s just type annotations. That’s why the simple map function turns out not to be so simple to type correctly: our function’s implementation may be isolated and encapsulated, but the types inside that function are not. Enter the any type, a way to drop back down to dynamic typing on an ad-hoc basis.

An any type is the type system equivalent of the principle of explosion: it allows the type checker to prove anything, and thus prove nothing. Its use in one file might not automatically infect your entire code base, but if it becomes widespread your static typing will reduce to little more than hit-or-miss code completion. Untyped contagion is not an unmanageable risk, but I wonder how many teams manage it successfully, or even take it seriously in the first place? There are probably a lot of legacy internal JS libraries out there with .d.ts files chock full of any. Caveat typtor, one might say, but considering the resources and energy being sunk into adopting these systems, one might also hope for the actual end value proposition to be a bit less contingent.
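
A sketch of how the contagion spreads in practice (the legacy helper here is hypothetical):

// A legacy helper typed as any (say, from a hand-written .d.ts file).
declare function fetchConfig(name: string): any

const config = fetchConfig('billing')    // config: any
const retries = config.retries           // still any: no checking, no completion
const limit = retries * config.maxLmit   // the typo compiles fine; the bug surfaces at runtime

// Every value derived from an any is itself unchecked, so one untyped boundary
// quietly exempts whole call chains from the type checker.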

Amusingly, if you went back to 2010 and told a conference of web developers that the consensus view in 2021 was basically that JavaScript should have just been Java from the get-go, you’d be laughed off the stage. The situation we find ourselves in is highly path dependent. The way we know that JavaScript, if it were to be re-invented today, would be nowhere near as loosey-goosey with types is because people are constantly re-inventing JavaScript today, almost always with some amount of type-safety as a core motivation. To be clear, I do see, over yonder, a Big Static Type Rock Candy Mountain coming into view. I’m just not convinced that the current muddle is a worthwhile step forward to that promised land.

I love static typing. It’s the best.

I do a lot of my “recreational” programming in Haskell, which is both intimidatingly strictly-typed, on one hand, and also a hotbed of experimentation for type system expressiveness, on the other. I heartily recommend Haskell to programmers of any experience or skill level who want an in-your-face demonstration both of the power of type systems and their shortcomings, along with all the ways those can be addressed via opt-in extensions (each, of course, with its own shortcomings). What it boils down to is that you can’t extend a type system so far that it becomes as expressive as the untyped (any) lambda calculus without losing all of the benefits of the type system. There is always a balance that must be struck between what you want to express in your code and its types, and what the type checker can practicably prove. This entails some deep insights into the nature of programming, languages, and types.

Something else that programming with Haskell can highlight is how responsibility for correctness extends well beyond even the most expressive type system. Extremely robust and battle-tested standard libraries, such as Control.Monad, almost always come with caveats: an interface (called a “type class” in Haskell) implementation (“instance”) is valid if its functions satisfy not only the required type signatures, but also some other conditions outlined, informally, in the documentation. Often this means that the results of combining the functions in various ways should have some invariant property, such as associativity or distributivity. It is left up to the programmer to ensure that an implementation conforms. Rejoice, for this is a version of the full-employment theorem for programmers.
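
The same gap exists in any typed language: an interface can pin down the signatures, but the accompanying laws live only in documentation. A rough TypeScript analogue (the interface here is illustrative, not any particular library’s):

// The types say only what combine looks like, not how it must behave.
interface Monoid<T> {
  empty: T
  combine(a: T, b: T): T
}

// The documented laws (combine(empty, a) === a, combine(a, empty) === a, and
// combine(a, combine(b, c)) === combine(combine(a, b), c)) are the implementor's
// responsibility. This instance happens to satisfy them; nothing forces it to.
const additive: Monoid<number> = {
  empty: 0,
  combine: (a, b) => a + b,
}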

There’s also an applicable cultural lesson, in the existence of a function called unsafePerformIO. The short story there is: “Don’t use unsafePerformIO”. The longer story is: “No, seriously. Don’t use unsafePerformIO.” It’s the any of the Haskell world: the ultimate escape hatch from a miserable situation. It’s the only way to run an IO action without binding it to main (which itself is the only way for app code to safely run an IO action) because it is the only Haskell function that can actually perform side effects before returning. If you peruse the Haskell standard library, you will, in actual fact, see unsafePerformIO used a fair bit. This is because the usual “safe” use of IO—binding to main at the top-most level—is itself implemented atop unsafePerformIO. In other words, unsafePerformIO isn’t intrinsically unsafe, but the onus is on the programmer—the library programmer who writes the unsafePerformIO call, not the library user—to “convert” it to a safe use. This is, to put it mildly, a very high bar.

Haskell programmers are incredibly ornery about unsafePerformIO and maybe three people in the world are allowed to use it without a mandatory psychiatric evaluation. any isn’t nearly as unsafe as unsafePerformIO, and in fact may be just the magic ticket needed to get code that should work dammit over the hump, and there’s the rub. You can say it’s not “best practice” all you want, but so long as the easiest, most surefire way to get serviceable code working and deployed is to turn off type checking and revert to dynamic typing, then you can take it to the bank that that is exactly what’s going to happen, a lot.

Of course, it’s absolutely possible to write entire libraries without ever opting out of type checking. That’s literally every library ever written in a real statically-typed language. I wouldn’t be surprised, at all, if the big TypeScript projects don’t have a single any anywhere in their source, or if they do it’s surrounded by FIXME flags and links to outstanding TypeScript issues, and carefully isolated from the rest of the well-typed code. I also wouldn’t be surprised, at all, if there are also plenty of small-to-medium-sized projects that have any sprinkled maybe not everywhere, but here and there… only as required to quickly fix a critical bug, or in dubious contributions pushed upstream from a major corporate user, perhaps. Corporate SWE orgs jumping on the TypeScript wagon en masse because a Senior Vice President read an article in BusinessWeek? Forget about it. For every perfectly any-free open source library there are, assuredly, scores if not hundreds of apps, deployed to production, that are absolutely drowning in a sea of any.

And, yet…

Static type systems can enable amazing, nigh-miraculous things. Stream fusion, for example, is a technique for automatically rewriting otherwise 100% natural and conventional list processing code (maps, filters, folds, zips, and so forth) to eliminate, in most cases, the need to allocate intermediate lists. The trivial example is a function that maps over a list of numbers and doubles their values, and a second function that maps over a list of numbers and increments their values by 1. In the naïve implementation, composing those two functions would allocate a list of doubled numbers, and then a final list of incremented numbers. What stream fusion can do, in so many words, is dissolve the individual loops and recompose the functions on the individual list elements, instead. A huge factor in being able to do such things correctly is static typing: being able to know, purely from the types of the functions, that compositions of those functions can be safely rearranged and optimized, transparently.
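
Written out by hand (in TypeScript, since real stream fusion is a rewrite the Haskell compiler performs for you), the transformation amounts to something like this:

const double = (xs: number[]) => xs.map((x) => x * 2)
const increment = (xs: number[]) => xs.map((x) => x + 1)

// Naive composition: one intermediate array of doubled values is allocated,
// then a second array of incremented values.
const naive = (xs: number[]) => increment(double(xs))

// The fused equivalent: the per-element functions are composed directly,
// and the list is traversed (and allocated) exactly once.
const fused = (xs: number[]) => xs.map((x) => x * 2 + 1)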

That’s just for starters. With a language like Haskell, you can prohibit imports of the IO type or unsafePerformIO, and have a high degree of confidence that the untrusted code you’re about to run can’t have much of an effect on the external world—barring a zero-day exploit—beyond putting itself into an infinite loop, or eating up memory unnecessarily. If that sounds unimpressive, then you probably haven’t tried to do anything remotely equivalent in vanilla JavaScript. Team up with the most talented JavaScript programmer you know, and try to block him/her from, say, getting access to fetch, heuristically. Even if you can eventually swing it, you’ll never have the confidence that comes from all access to the external world being funneled through a single type.

Type systems are all about what the type checker can prove, and parlaying that proof into real, concrete benefits. Merely implying or not excluding the possibility of type correctness is, frankly, worse than no type checking at all, since it can seriously lead a team astray, induce significant over-confidence, encourage dubious development practices bordering on cargo-culting, and be very expensive in time, money, and personnel. By all means, if you’re writing a well-isolated, robustly-engineered library from scratch, and have the wherewithal to rule out falling back to dynamic typing here-and-there, when the going gets tough, then don’t let me stop you. If you have a pre-existing and sprawling code and dependency base of mission-critical vanilla JavaScript, and a large team of engineers to match… why on Earth would you want to tee up this particular moon-shot? Best case scenario, you would be making the perfect the enemy of the good… and, in my experience, at the end of the day the perfect never actually turns out to be much of a challenge to the good enough, when it comes to developing and releasing software.

You might say that I’m the one making the perfect the enemy of the good. What’s wrong with opt-in type checking, for those who want to use it, or for the code that’s easiest to adapt? Even a marginal benefit is a benefit, no? Sometimes it’s a benefit, sometimes though it’s just a wrecking ball to your stack, overall code quality, and engineering processes. Static type checking that doesn’t actually give you high confidence—and a low chance of high confidence is actually just all-around low confidence—in its correctness isn’t a benefit of any sort… it’s just code bureaucracy.

React Development Rules: Part 1

1. React is not an architecture

As modern libraries go, React has remarkably minimal API surface area. This is greatly to its credit: it handles a small set of closely-related concerns and avoids the weird mixture of egoism and insecurity of do-everything frameworks. It’s a toolbox starter set, instead of a custom-built multi-tool. This can be confusing for developers used to the guardrails of frameworks that come with neat buckets for all their code—controllers here, data entities there—and an all-encompassing metaphor for how those pieces fit together.

I once worked with an engineer who believed, rather perversely, that Ruby being so thoroughly and dynamically object-oriented meant that the lessons of the last 40 years of object-oriented development no longer applied. You didn’t need the principle of interface segregation when you could just use metaprogramming to change the interfaces on the fly; and global function spaghetti was no longer global function spaghetti if the functions were written, laboriously, as classes with a single call method. His code was some of the worst I’ve ever dealt with, not because he was stupid, but because he overfocused on maximizing what he could do, and let himself be convinced that, in this brave new world, none of the old wisdom—mostly concerned with should instead of could—was relevant.

“Everything is a component” is to React as “everything is an object” is to Ruby: trivially and misleadingly true. The error is in seeing this as an end, when it’s actually a beginning: closer to a language extension than an easy-bake app toolkit. Like many libraries or frameworks, React’s quick out-of-the-box successes seem to belie the emergent complexity of real-world projects at scale. Simple apps or proofs-of-concept can often be built out without much thought for any additional structure, creating the impression that no additional structure is required. (That’s usually the case, generically: if libraries didn’t automatically reduce complexity and make certain tasks easier on the local scale, at least, then they wouldn’t see any use at all.) The avoidable conceit is that because React’s simple and clean design requires only one kind of sprocket, one kind of sprocket is all that you or your application should have.

It’s incumbent on the developer to anticipate the need for additional structure, and to resist the illusory simplicity of “a component is a component is a component.” A Ruby application that consists of just subclasses of Object without any architectural patterns will devolve into code spaghetti quickly, and so will an application that uses React without drawing any distinctions between different flavors of components. That entails, for instance, drawing a meaningful distinction between something like a generic Button component, on one hand, and a top-level Signup screen, on the other. It extends to seeing built-in helpers like the useContext and useReducer hooks from the same perspective: as low-level tools that could be splattered like semicolons all across your codebase, but probably shouldn’t be, if they’re not contributing to good software architecture and code design.
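
For instance, here is a sketch of keeping one of those low-level tools behind a single, domain-level seam (the session context is hypothetical):

import { createContext, useContext } from 'react'

// Hypothetical: the current user's session, provided somewhere near the top of the tree.
interface Session { userId: string; roles: string[] }
const SessionContext = createContext<Session | null>(null)

// One domain-level hook wraps the low-level tool. Components depend on "the session,"
// not on the mechanics of useContext and a particular context object.
export function useSession(): Session {
  const session = useContext(SessionContext)
  if (!session) throw new Error('useSession must be used inside a SessionProvider')
  return session
}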

Ultimately, the simplicity of React itself is not a get-out-of-engineering-free card. On the contrary, the onus is on developers and teams to implement generally applicable patterns and principles that will serve us just as well in every future project, and which can be quickly grokked, in the abstract, by new team members.

2. React is a library for creating user interfaces

Zoom out too far and this becomes a bit tautological: in one sense, your entire front-end app can be seen as just a user interface for your back-end API. On its own terms, however, React is handling a familiar set of concerns for the front-end: rendering/compositing graphics primitives, and event-driven user input. In other words: it’s a view layer. Logic bound up too tightly in a view layer can be hard to reuse and is notoriously difficult to test, and the interface itself can be subject to dramatic changes, so it’s important to constrain the scope of your components’ responsibilities in much the same way as you would a custom UIView in a native iOS application.

No, that doesn’t mean building out parallel hierarchies of “view controllers” or such, but it does mean that care should be taken to ensure that data manipulation, domain and business logic, and so on, are appropriately separated and abstracted from the user interface code. Common lower-level patterns like Redux go a long way here, but there aren’t any intrinsic guards against just falling into the habit of treating your application and your user interface state as one-and-the-same, and without a modicum of care they can end up just as tightly coupled as inlined logic would be.

Furthermore, the idea isn’t that application logic should be separated from the user interface because you might change the entire underlying system (i.e., swap out React for Ember) on a whim, but because the two concerns themselves should be easy to change on their own terms. This is exactly the same as maintaining a common, generic API backend that can support scripting, a web interface, and multiple native apps. They’re not separated so that they can be completely replaced on a whim, they’re separated so that they can evolve—in ways small or large—with minimal knock-on effects for other systems.

A simple rule of thumb is that if code isn’t managing presentation or handling events, then it probably shouldn’t be baked into a React component. That said: what happens at runtime is potentially a different story. The crucial point is that your code design should prioritize separation of concerns, even though ultimately every program is a final, hard coupling of its substructures and dependencies. The fact that a library like react-redux, for example, ties everything back together into a meta-structure of React components isn’t the issue. The issue is when you sacrifice the ability to separate your application code from React at any level, even conceptually: then you’re increasing the carrying cost and giving up flexibility, extensibility, and reliability for no good reason.
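
A minimal sketch of that rule of thumb in action (the pricing logic here is hypothetical):

import React from 'react'

type LineItem = { price: number; qty: number }

// Plain domain logic: trivially unit-testable, reusable outside React.
export function orderTotal(items: LineItem[], discountRate: number): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0)
  return subtotal * (1 - discountRate)
}

// The component's job is presentation; the calculation is just called.
export function OrderSummary({ items, discountRate }: { items: LineItem[]; discountRate: number }) {
  return <p>Total: ${orderTotal(items, discountRate).toFixed(2)}</p>
}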

3. Don’t wage a battle against HTML and CSS

Fact is: the vast majority of apps that use React will never run on any platform other than HTML and CSS in a current web browser. If we exclude React Native apps, that can just be taken to be 100%. Ignoring the finer points of HTML and CSS, or trying to abstract them away entirely, serves little purpose, but can lead to a lot of extra work and maintenance costs.

Even developers who weren’t around for the early days of the Web when <table> was used primarily for page layout can take the kneejerk aversion to tables way too far, to the point of re-implementing <table> as a raft of components built from nested <div>s or <span>s and a pile of finicky, potentially buggy custom CSS. It’s really quite simple: if your data or interface is tabular, then just use a <table>; that’s what it’s there for. If you want to lay things out in a grid that aren’t actually related to each other two-dimensionally (i.e., the rows and columns aren’t semantically meaningful and orthogonal), then, duh, don’t use a <table> for that. Not only does <table> handle most of your markup and positioning for you, it’s also more readable, and the built-in semantics are used by browsers and screen readers to enhance accessibility, automatically, without finicky ARIA configuration.
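
In JSX terms, the tabular case really is this boring (a sketch, with a made-up invoice shape):

import React from 'react'

type InvoiceRow = { item: string; amount: number }

// Tabular data, so a table: markup, positioning, and accessibility come mostly for free.
const InvoiceTable = ({ rows }: { rows: InvoiceRow[] }) => (
  <table>
    <thead>
      <tr><th>Item</th><th>Amount</th></tr>
    </thead>
    <tbody>
      {rows.map((r) => (
        <tr key={r.item}><td>{r.item}</td><td>{r.amount}</td></tr>
      ))}
    </tbody>
  </table>
)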

It ought to go without saying, but any UI designers on the dev team should also know HTML and CSS and be familiar with the default behavior and known limitations, so that they’re not pushing UI embellishments that are easy to whip up in Sketch in lieu of proposals that would work just as well but with a much simpler implementation. Trying to abstract away the developers themselves—so that the designers’ Sketch files can be treated, purely on their own terms, as an abstract, gold-standard hand-off from one team to another—is far worse, and leads to an explosion of complexity, byzantine UI hacks, a general breakdown in code quality, and immense developer frustration.

Key to successful application development, with or without React, is knowing your materials, and having a sense of what is high-value-for-effort innovation and what is low-value development hell (custom scroll bars, custom drop shadows, pretty much anything else that can be summarized as “a custom-built version of some browser-provided functionality so that it can be styled slightly more flexibly.”) Finding the right balance between abstractionism and reductionism can sometimes be tricky, but that, much more than knowing the syntax of any given programming language, is what developers do.

4. SVG icons don’t need to be React components

On the flip side, the fact that React works by taking XML-style syntax and using it to render and update a DOM tree can be an irrelevant implementation detail and a costly red herring. If you’re using it for something other than rendering your app’s user interface, then there’s probably a better, more efficient tool, ranging from a simple template string on up to a purpose-specific library like D3. Purity as a development priority is meaningless here: there aren’t any prizes for unnecessarily baroque solutions that only serve to maintain the comfort blanket of a universal API. (Full disclosure: I’m totally guilty of having written a kludgy port because I didn’t really understand why an interface that made sense in one paradigm didn’t make any sense in the other. More on that another time.)

A particularly common form of React abuse is to look at a set of SVG icons, to see that SVGs are XML documents, to note that React supports rendering JSX to SVG, and then, finally, to convert all static SVGs to React components. Please do not do this: the fact that an image file format’s underlying implementation is XML does not mean you want to be rendering those files as React components. Browsers are perfectly capable of loading and rendering graphics files all by themselves, in several different ways, whether that’s a PNG or an SVG. The underlying XML format is an utterly irrelevant implementation detail, and using the React hammer on that particular nail doesn’t do anything but worsen performance and limit expressive power.

That doesn’t mean there aren’t any uses for generating an SVG document in a component, but if you wouldn’t replace icons/email.png with an EmailIcon component that drew the icon procedurally, with a series of curves and stroke operations, to a <canvas>, then why would you want to do the equivalent to icons/email.svg? Leave your SVG files as SVG files. Even if you think you have a reason not to, look for alternatives.

For example, it’d be much simpler to have 4 static arrow SVGs than to have one ArrowIcon component that takes a direction property and applies a rotation transform to the path. Likewise, for outline or filled versions: the absolute last thing you want to be doing is tweaking stroke thickness via a component’s props. That way lies madness. Leave the SVG-tweaking responsibilities to Sketch, lest the first change request arrive a week later asking if a certain icon can have one part stroked but another part filled, and then the next one asking if another icon can have two stroke widths, ad infinitum. That doesn’t sound so bad? Well, now we have separate stroke and fill versions of icons, but only for half of them—React can handle that, right? Don’t fall into this trap.
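
A sketch of the contrast (the file paths are hypothetical; use whatever your bundler or public directory provides):

import React from 'react'

// Preferred: four static assets, referenced like any other image.
const BackButton = () => (
  <button aria-label="Back">
    <img src="/icons/arrow-left.svg" alt="" width={16} height={16} />
  </button>
)

// Avoid: one ArrowIcon component with a direction prop that rotates the path at render time.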

Maybe handling SVG icons as generic image assets seems less flexible in terms of what you can do with any given specific icon, but that’s a good thing, because, in the general case, you don’t want that flexibility, and it is ultimately illusory: in 99.9% of cases, there’s nothing you can do with a per-icon React component and stylesheet that can’t be handled in the icon SVG files themselves, as a graphic design concern, using a real vector graphics editor. Furthermore, the same set of icons ought to be usable universally, in other applications, email templates, or promotional materials. Do yourself a favor, and make less work for yourself by leaving SVG files as SVG files.

Vertical Coupling: The Silent Killer

A perennially popular topic in software engineering circles is “monolithic” architectures and how to fix them. “Architecture” is a bit of a misnomer here, since often the problem is the lack of any architecture. The way the story goes is that you have some 10-or-so-year-old web application. Over the years your faithful steed has grown to include so much interrelated functionality that the gears of progress are grinding to a halt with each additional line of code. That single application handles your onboarding, billing, content, messaging system, gamification, third-party integrations, and many additional systems, which are coupled in often insidiously mysterious ways.

There is more than one strategy for dealing with this sort of situation. The most popular is probably despair. Another is microservices, which involves splitting all that functionality into separate, logical applications.

This can be a huge improvement. It can also be an excuse to get bogged down by a ton of technology-creep and research expeditions. What it often isn’t is a change in architecture. It is true that it’s hard to pull off the microservices gambit successfully without decoupling and cleaning up a ton of code. Many of the benefits that accrue, however, tend to be from simple refactoring and physically enforced separation of concerns. Oftentimes, the individual services end up looking much the same as the original app, just cut down to size and cleaned up a bit. To be clear: this isn’t a bad thing. Nor is it a universal experience. What it also isn’t, necessarily, is a different architecture.

Perhaps I’m splitting hairs, here. If someone wants to talk about reducing horizontal integration as an architectural concern, that’s perfectly fine—the end result is still better software. Where I wish more attention were paid is to the architectural problems of vertically integrated code, and how to resolve them. I have to give a tip of the hat to Rails, here, because this actually was recognized as a problem fairly early on. “Skinny controllers, fat models” is a good example of a “best practice” that evolved out of an effort to reduce vertical coupling in the application stack. Of course, those “fat models” are a problem all their own.

What do I mean by vertical coupling? Well, I work mostly with APIs on the server-side of things, often with the Grape framework. It is incredibly common to see—and not just in a project using Grape—the following concerns combined into a single class: routing/transport, conditional authorization, business logic, database access, serialization, and data shaping. There are probably others. These are all things your app needs, but by collapsing them to a single point the complexity can explode enormously.

Imagine a set of finite state machines—each one can be delightfully simple and together they can be combined in useful and efficient ways. If, however, you decide to take three state machines that are always used in conjunction and just smoosh them together into a single state machine, you’re almost guaranteed to create a huge, unmaintainable mess with a massive increase in the number of states and transitions. That’s what happens when you combine all those concerns together in one place: the number of states that code represents jumps enormously, making it harder to reason about, harder to change, harder to test, and harder to have confidence in.

What we want to do, ideally, is separate at least some of those concerns and move them behind architectural boundaries.

For example, Grape is great at describing the shape of your external HTTP API—your routes, verbs, and parameters. Forget for a second that some users shouldn’t be able to access every route. There should be, in principle, a sort of API platonic ideal that just describes every possible way HTTP can be used to interact with your app. You should also be able to describe that form without hardcoding in your ORM classes or business logic. Of course, a bunch of HTTP endpoints by themselves is pretty useless. The thing is, the fact that you access your app via HTTP isn’t really an integral aspect of your app. It might be an unchanging aspect, but what your app does is what your app does whether it’s accessed via HTTP, websocket-based messaging, a command line, or some hypothetical future protocol.

It’s right there in the name: Application Programming Interface. An interface is an abstraction. An interface is a mechanism for separating concerns while ensuring they can continue to be composed and used together. There are actually two interfaces at play, here. There’s the super obvious one where you’re defining endpoints that a browser or app can hit. There’s also the more pure interface of your application business logic and its underlying domain. Squashing the two together is a mistake, because it robs you of the key advantage of the second kind of interface: the ability to vary its implementation.

Why would you want two implementations of your application? You don’t. The thing to remember is that we’re talking about two implementations of an interface, not two implementations of an application—the larger part of your application should itself be encapsulated appropriately. The actual concrete implementations of its public interface(s) should be only a small part of your total application code.

OK, why would we want two implementations of the interface, then? Well, one reason could be to provide one implementation that is limited in functionality, for unauthenticated or unprivileged users, and a second implementation that implements the entire interface. Now, access control can depend only on which implementation gets injected into the external, web-facing interface, in a single location, rather than sprinkling potentially brittle and conflicting authorization guards throughout the HTTP API. Think of how much easier it would be to reason about and test your access control when you take HTTP routing and requests out of the picture entirely.
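
Here is a rough sketch of the idea, in TypeScript for brevity; the pattern itself is language-agnostic, and every name in it is hypothetical:

type Summary = { totalAccounts: number }
type Report = { accountId: string; lines: string[] }

class NotAuthorizedError extends Error {}

// The application-level interface: what the app can do, with no mention of HTTP.
interface ReportsService {
  publicSummary(): Summary
  detailedReport(accountId: string): Report
}

// Full implementation, for authenticated users.
class FullReportsService implements ReportsService {
  publicSummary(): Summary { return { totalAccounts: 0 } /* real logic elsewhere */ }
  detailedReport(accountId: string): Report { return { accountId, lines: [] } }
}

// Restricted implementation: privileged operations are refused here, in one place,
// instead of via guard clauses scattered through the HTTP endpoints.
class GuestReportsService implements ReportsService {
  constructor(private readonly full: ReportsService) {}
  publicSummary(): Summary { return this.full.publicSummary() }
  detailedReport(): Report { throw new NotAuthorizedError('not available to guests') }
}

// The web-facing layer is simply handed one implementation or the other.
function serviceFor(authenticated: boolean): ReportsService {
  const full = new FullReportsService()
  return authenticated ? full : new GuestReportsService(full)
}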

This is obviously a light sketch of one possibility, but hopefully the advantages are clear. Sometime soon I’ll have a second post, about the practicalities of following this approach with Grape, along with concrete examples.

Frameworks: Friend and Foe

Perhaps the central issue in software design/architecture is the appropriate handling and constraint of complexity, both up-front and as the software grows. To that end, there are many frameworks, in whatever your chosen language, which have reducing or eliminating complexity as their core purpose. They’ll call it “being opinionated” or “convention over configuration” or some other catchphrase, but they all boil down to, essentially, the notion that taking work and responsibilities away from the developer is the same thing as taking away complexity. This is true, up to a point.

If it weren’t true, no one would ever use any of those frameworks. (Except for, perhaps, Java developers.) The problem is that complexity is a bit like trying to grip a balloon in your fist—squeeze parts of it with your fingers, and other parts will get pushed out wherever there is room. Over the long-term, complexity tends to out, and the harder you try to squeeze it down the more likely it is to explode. By letting the framework lull you into complacency, and ignoring the complexity at the beginning, you’re risking a descent into a lifetime of add-on, callback, and middleware hell to eke out even small improvements in functionality.

This isn’t to say that it is all doom-and-gloom. A good framework can help you get off the ground running. A good developer can use their preferred framework judiciously, and keep themselves from falling into an inescapable black hole of coupling app code ever-more tightly to the framework. A framework is like a giant ox, in that both are tools that can really help you in the short term and really hurt you over the long term, if you don’t approach them with a healthy respect for their dangers as well as their advantages.

People who know my opinions on the matter are often surprised when I tell them that I use ActiveRecord in certain projects. The difference between how I use it and how many large Rails projects use it is that my “models” are a few lines long and consist almost entirely of relationships. I use it as an ORM, and only as an ORM. Any actual business logic is elsewhere, and ActiveRecord is simply a convenient DB querying and data access tool. Because those classes are only used to save and load data—and not, say, to handle the logic of moving money between accounts, or calculating reports, or syncing to Redis, or any number of things that can be crammed into an AR subclass—they’re purely an architectural boundary. On the app side, they’re wrapped by a bridge class with an interface derived from the app and business logic, not from ActiveRecord. As a result, there are zero unit tests that involve AR in any way, the app code can be comprehended and maintained without caring about AR, and systems design decisions like which database storage system is most efficient for different pieces of data can be made almost completely independently of the software engineering.

The tough part, conceptually, is seeing these as good things. Something I’ve heard frequently is “well, we use Rails, so why shouldn’t we use a Rails feature if it’s there?” The “use it because it’s there” impulse is a strong one. This is where the framework making things really easy at the beginning starts to come back to bite: a framework is almost always going to have an easiest-possible solution for something. StackOverflow will have 20 users fighting to be the first to tell you what that solution is, and to claim that their solution is “best practices.” Part of being a software engineer, or a programmer if you prefer, and not a code monkey is not automatically forfeiting those decisions, and instead taking a step back and looking at the effect on your app as a whole. If that simple, “best practices” solution involves turning a 900 line file into a 901 line file, then it’s worth considering what brought things to that point, why that one file does so much, what happens the next time any of that needs to change, and just how confident you are that it actually works as intended.

What is a Design Principle?

Engineers love design patterns. They make our jobs easier, because they provide us with a lot of time-tested, proven ways of solving common problems. We also love design principles, which help guide good code design. Confusing the two is a common, and unfortunate, mistake that can lead an engineer down a deep well of poor code design that can be hard to escape. We all know a principle when we see one (Single Responsibility Principle, etc), but what, exactly, is a design principle?

A design principle is a top-down, goal-directed, descriptive heuristic.

It’s top-down, because design is top-down: high level concepts and systems are recursively decomposed into smaller systems and finally into classes. That doesn’t mean starting with a complete spec, but it does mean that, whatever your methodology, the end result needs to make sense from that perspective. If your end result has no high-level design then the process has failed.

It’s goal-directed, because it’s in service of external goals and not a goal in-and-of itself. Those goals are many but include: ease of testing, long-term maintainability, reusability, and velocity. A design principle that makes any one of those things worse is less than useless.

It’s descriptive, because it describes a likely ideal outcome without necessarily prescribing the means to achieve it. The difference is between giving someone enough information to reliably identify a cow, and that person trying to reconstruct a cow from that description without seeing one in advance.

It’s a heuristic, because it’s a rule of thumb, not a law. Heuristics help us get into and stay in the ballpark, by cleaving off unlikely or obviously unpalatable alternatives. Once there, though, they don’t help you identify the right answer. Occam’s razor, for instance, is a heuristic that is often abused. It says that the simplest explanation should be preferred to others. The point isn’t that any given simple explanation is necessarily more correct than a more complex one; it’s that simple explanations are preferable as a practical matter to more complex ones (and note that the two need not be mutually exclusive), and, more importantly, that as a basis of inquiry they’re likely to prove more fruitful.

So when applying a design principle, keep these things in mind: it’s top-down, so judge the result against the high-level design rather than the other way around; it’s goal-directed, so if it isn’t making testing, maintenance, reuse, or velocity better, it isn’t helping; it’s descriptive, so treat it as a way to recognize good design, not a recipe for producing it; and it’s a heuristic, so use it to narrow the field, not to hand you the answer.

The Monty Hall Problem

The Monty Hall problem is a classic example of a provable, simple truth that runs entirely counter to our expectations. When it was widely distributed in Parade Magazine in 1990, thousands of readers—many very well educated, including scientists—wrote in to vigorously disagree with the proffered conclusion. Yet, computer simulations have (trivially) proven the extremely intuitive solution to be incorrect.

The problem is simple, and is based on the game show “Let’s Make a Deal.” A game show host presents you with three doors. Behind two of the doors are goats. Behind the third door is a new car. You’re invited to select one of the three doors. After you pick your door, the host opens one of the other two doors to reveal a goat. He then gives you the option of switching to the door that you did not pick and that he did not open. Should you switch?

The answer is yes—your odds of winning the car are doubled if you switch. The intuitive answer is “it doesn’t matter”—there are two doors, one of which definitely has a car behind it, so if you were to flip a coin and pick one you’d have a 50% chance of picking the right door. That answer is provably incorrect. So, what’s going on here?

Let’s step back and take another look at how this could go. You’re back in front of the doors, which have been reset. This time, after picking your door (let’s say you pick door C) the host offers to let you either stay with the door you picked, or get both of the other doors (A & B). If the car is behind door A or door B, then you win it. Should you switch?

Of course you should. It’s plainly evident that two doors give you twice as many chances to get the car. With one door, you only have a 33% chance of having picked the car. There’s a 66% chance the car is behind one of the other two doors. Now, remember again that there is only 1 car, and 2 goats. That means that at least one of the two doors definitely has a goat behind it. The question to consider is: are the odds different if the goat is behind door A rather than door B? You know for an absolute ironclad fact that one of them is a goat, so does it matter which door has a goat?

Let’s say you switch. At this point, your choice is locked in. The host reminds you that behind one of the two doors there is definitely a goat, and asks what you think your odds of winning are. You tell him that you’re twice as likely to win as not win, since you have 2 chances to get the car. Then the host says “What if I were to tell you that behind door A is a goat, what are the odds now?”

There remains, of course, a 66% likelihood of winning the car.

There was always going to be a goat behind either door A or door B (or both, of course). And the host was always going to tell you which one had a goat, regardless of whether it was door A or door B. You have received absolutely zero new information that would affect your odds of winning the car.

This scenario is completely identical to the original formulation, where the option to switch is given after the goat is revealed. There is no new information, you always knew there was a goat, and you knew the host was going to show you one. The key to all of this, and what makes it counter to our intuitions, is that the door opened to reveal the goat wasn’t chosen randomly. The host was never going to open your door, even if it held a goat. So, even though there are now two unopened doors to choose between, the odds aren’t equal because the two sets of doors were treated differently.

If a complete stranger were to come across the set and see the three doors with one already open to reveal a goat then it would be a coin flip for that stranger—because they don’t know which door you initially picked. That extra information is what tips the odds in your favor if you end up switching.
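
If any doubt lingers, the simulations mentioned at the top really are trivial to write; here is a sketch:

// Simulate many Monty Hall games and report the win rate for staying vs. switching.
// The host always opens a goat door from the two you didn't pick, so switching wins
// exactly when your first pick was not the car.
function montyHall(trials: number): { stay: number; switch: number } {
  let stayWins = 0
  let switchWins = 0
  for (let i = 0; i < trials; i++) {
    const car = Math.floor(Math.random() * 3)
    const pick = Math.floor(Math.random() * 3)
    if (pick === car) stayWins++
    else switchWins++
  }
  return { stay: stayWins / trials, switch: switchWins / trials }
}

// montyHall(1_000_000) comes out around { stay: 0.333, switch: 0.667 }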

Haskell & XCode Part I: Using Haskell In Your Application

I’m working on an experimental graphics app that delegates a lot of functionality (including user-scriptability) to (mostly) pure functional code, written in Haskell. To be clear, the point here isn’t to “write a Mac app in Haskell.” Instead, my Haskell code consists of certain domain-specific operations on data structures. Transformed data is returned to the main app, to be interpreted as appropriate.

There are two main problems to solve: (1) integrating the Haskell part of the app in the first place and (2) exchanging structured data between Swift and Haskell. This post is Part 1, and I’ll discuss marshaling data across the boundary in a later post. The first step turns out to be pretty simple, in contrast to the impression given by an article on the official Haskell wiki on the subject. This post covers the process for an Application target. Framework targets are a bit different, and will be covered in Part 2.

The Haskell Code

Haskell integration with other languages is based on the Foreign Function Interface (FFI). The FFI handles translating and calling the external function (or vice versa). All we have to do is tell it a little bit about how the function gets called, and what the types are. We’ll start with a very simple function:

module Triple where

triple :: Int -> Int
triple x = 3 * x

I went with triple to avoid any confusion with Double. In order to export this function to be called elsewhere, we have to first include the ForeignFunctionInterface language extension at the top of the file:

{-# LANGUAGE ForeignFunctionInterface #-}

The last thing we need to do is actually export the function. Note that the typing situation can be a little weird, and FFI provides the Foreign.C.Types module with C-specific types. In this case, however, the normal Int type works just fine.

foreign export ccall triple :: Int -> Int

ccall specifies the calling convention, i.e., how to find the function and its arguments in memory. In this and most cases ccall suffices, and tells the compiler to use the C calling conventions. Finally, we simply repeat the function signature. Note that in many cases it will not be this simple to translate a Haskell signature into the C-compatible version. That’ll be covered in more detail in Part 3.

Compiling The Haskell Code

GHC has a gazillion flags… the man page is truly frightening. Luckily we need only to use a handful. The command to compile our simple Haskell file looks like this:

ghc --make -dynamic -shared -lHSrts -lffi -O -o triple.so triple.hs

There’s a lot going on here: --make tells GHC to find and build any modules the file depends on; -dynamic and -shared produce a dynamically-linked shared library rather than an executable; -lHSrts and -lffi link against the Haskell runtime system and libffi, which the runtime requires; -O enables optimizations; and -o triple.so names the output file.

Add Files To The XCode Project

The compiler will output four files. The only two we’re interested in are triple.so and triple_stub.h. Add those to your project. Ideally you would just add references to the files, so that modifying and recompiling the Haskell source won’t necessitate any copying or re-adding of files later.

Configuring The Header Files

In your bridging header file, add #import "triple_stub.h". If you don’t have a bridging header, you can just create a new .h file and name it projectname-Bridging-Header.h.

triple_stub.h includes HsFFI.h, which is part of the core GHC libraries. We have to tell XCode where to find the header file, via the project inspector. Under the “Build Settings” tab, find the “Header Search Paths” setting and add the location of the GHC includes directory. On my system that directory is:

/Library/Frameworks/GHC.framework/Versions/Current/usr/lib/ghc-7.10.2/include

but your mileage may vary.

Final XCode Build Settings

triple.so should have been added automatically to “Linked Frameworks and Libraries,” at the bottom of the “General” tab for the app target. If not, add it now.

This isn’t enough to actually make the library available, though, so we have to tell XCode to copy it via the “Build Phases” tab. Click the “+” at the top and add a “New Copy Files Phase.” Set the “Destination” dropdown to “Frameworks.” Now, drag triple.so into the file list for the new build phase.

Setup Complete!

Now that we have our project set up, we can call our function from our Swift code just like any other C function that is automatically bridged for us. The only caveat is that before we call a Haskell function we have to call hs_init, which initializes the Haskell runtime system. There is a corresponding hs_exit function to call when we’re all done with Haskell.

func applicationDidFinishLaunching(aNotification: NSNotification) {
  hs_init(nil, nil)     // start the Haskell runtime before making any Haskell calls
  NSLog("\(triple(4))") // => 12
  hs_exit()             // shut the runtime down once we're done with Haskell
}

Sending and receiving more complex data will require a bit more work, and I’ll cover that in a separate post.

Fewer Tests Are Better Than Brittle Tests

Tests shouldn’t have to be changed or updated all that often. If they are, then they’re getting in the way of what tests are supposed to help us achieve: high velocity, effortless refactoring, code maintenance, etc. High test churn is an indication that something is wrong with either the testing methodology or the code design. The proximate causes are legion: lots of stubbing/mocking, large numbers of dependencies, spaghetti classes, testing glue code, high level (integration) tests masquerading as low level (unit) tests, and so on. This is a separate issue from keeping tests DRY. If your helper modules or shared contexts are churning, then that’s likely as much a smell as if you have to constantly rewrite the tests themselves.

There are three main kinds of problems, in my experience:

Testing The Wrong Thing

It’s really easy to test things you shouldn’t, especially if library glue/boilerplate code makes up a significant fraction of your app. There’s sometimes an insistence on exhaustively testing “our code,” even if our code doesn’t actually do anything. Or there might be pressure, internal or external, to write tests just to say you wrote tests. Often this will take the form of testing rote configuration of some framework class, which is a combination of code duplication and testing third-party code. Not only are you probably “testing” something that is liable to change, but you’re quite possibly coupling your test to your implementation, at best, and the implementation of a third party library, at worst.

A very rough rule of thumb is not to write a test if you didn’t actually write a function or method yourself. In those situations where you do feel the need to write a test, then it should be functional: varying inputs and asserting on results, not interrogating and asserting against internal state. A good example might be validations built into an ORM class: testing those validations should be functional, i.e. the validate method should be called with actual valid or invalid data—simply using introspection to check that “this class has a uniqueness validation registered on it” is pointless.

Testing Too Much

If you fall into the mindset that good testing is to throw a veil over the code and rigorously test against any conceivable bug via every single access point, then it’ll be easy to ramp up the quantity of tests you write to an absurd level. This can result in a lot of test churn if the things you’re overtesting end up changing—and they probably will. For example, you might write a bunch of tests that verify logic for a method that simply forwards its arguments elsewhere. Test logic present in the class, method, or function. Don’t test delegated logic.

For instance, if you have a method that does some sort of computation, and another method that composes that method:

def calculate_tax(product, state); end;
def tax_for_order(order); end;

Then tests for tax_for_order shouldn’t be testing that individual taxes were calculated properly. The tests for calculate_tax handle that. A good rule of thumb is that if you find yourself testing more than one thing for a given method/function, or testing the same thing across multiple test subjects, then you’re either testing logic that is elsewhere or logic that should be elsewhere. How applicable the rule is will vary based on how vital the thing you are testing is, whether it’s public vs. private, whether it’s part of an interface that client code might use, etc. In general, though, well-written code will have simple, single-issue tests. In this example, tax_for_order might initially look like this:

def tax_for_order(order)
  if @states_we_tax.include?(order.state)
    # ...
  end
end

Now you’re testing at least two things: (1) Whether we even charge tax on this order, based on the state and (2) What the tax for the order should be. Code that is more cleanly tested might look like:

def order_subject_to_tax?(order); end;

def tax_for_order(order)
  if order_subject_to_tax?(order)
    # ...
  end
end

(An even worse initial version might be something more like @states_where_we_have_warehouses.include?(order.state).)

Testing Poorly Designed Code

There’s nothing wrong with mocking, stubbing, test doubles, etc. However, too much mocking, or stubbing in low-level unit tests, can oftentimes be a code smell. Having to mock or stub a lot is a strong indication that a class is too tightly coupled, either to its dependencies or because the class combines a lot of responsibilities. If you have to stub the class you’re testing itself, then something has gone horribly wrong. If you’re stubbing or mocking one of the test subject’s own internal methods, then the test is telling you, in the most direct and obvious way possible, that the method belongs in another class.

Too much mocking/stubbing can be caused by a class having too many dependencies. Having many dependencies is, furthermore, an indication that your class is doing too much. Often this’ll be paired with large methods that tie everything together. One of the chief benefits of testing is its ability to highlight larger-scale design problems: if it sucks to test something, it’s probably poorly designed. Being at a loss for how to test something, or even just really not looking forward to it, is a strong indication that you should be refactoring, not papering over the problem with painful, complex tests.

Conclusion

None of these problems are peculiar to any particular testing methodology. However, if you’re encountering them while ostensibly practicing TDD then you should step back and reconsider how much you’re actually letting the tests drive the code. Actually writing tests first is a key part of TDD, of course, but putting the tests first is, in my opinion, both more important and often overlooked entirely.

How I Learned to Stop Worrying and Love TDD

TDD can mean many things—from simple ‘test first’ practices focused mainly on integration or acceptance tests, all the way down to a highly granular, line-by-line, red-green-refactor methodology. I, personally, am not a TDD purist. I’ll use TDD in some circumstances, and take a more relaxed approach in others. Generally speaking, the more concentrated and encapsulated the functionality the more likely I am to use TDD. Try to use TDD with highly diffuse code, and I’m likely to freeze up and suffer from design paralysis. I also picked up some poor testing practices by osmosis in my early years that I’ve had to mindfully and aggressively prune. The more I prune, the more effectively I find I can use TDD, and the more I find myself actually using TDD. In hindsight, a lot of my aversion to TDD over the years is traceable to my own bad habits and misconceptions—stumbling blocks and speedbumps that can slow the TDD process to a halt.

The simplest and first to go was the problem of not actually letting tests drive the design—rigidly imposing a design from the start and then expecting TDD to magically and readily produce a well-written, reliable implementation seems to be a common practice. Once it becomes obvious that different forms are more—or less—readily tested, it’s a simple matter of making those patterns the default—composition over inheritance, dependency injection, encapsulation, minimal side-effects. This isn’t the be-all-and-end-all, but not only is your code easier to test but almost as a direct consequence it’s also much more well-designed. DHH might call this “design damage,” but I call it “Testability Driven Design”—generally speaking, if your code isn’t testable then it’s badly designed, and if it’s well designed then it will be easily tested. If you have to invent new disparaging terms to justify the poor testability of your code, well… good luck.

For me, the second wrong idea about testing to go was the intuition of the well-specified unit. I’m not sure how prevalent this is, but for the longest time I labored under the belief that each class should be a black box, and its interface tested exhaustively. It didn’t matter if the class itself had no logic of its own, and simply incorporated functionality tested elsewhere. Taken to an extreme, this will be an obvious absurdity. The problem is that it often is taken to an extreme. Plenty of Rails devs will write tests that actually just exercise ActiveRecord, rather than their own code, in the belief that they need to exhaustively specify everything, right down to the automatically provided attribute readers. I believe the origin here lies in libraries which encourage much blurring of boundaries—the harder it can be to tell where app code ends and library code begins, the more one will instinctively ‘play it safe’ by over-testing.

This may be more or less controversial, but in my view a unit test should test only logic, and only logic that is present in the unit itself. “Logic” being code whose behaviour will vary depending on the input. Attribute accessors are not logic—simply testing for their presence is code duplication, and they should instead be exercised by higher-level tests (if they’re not, then they’re not used elsewhere in the code and so why do they exist?) Taking arguments and directly passing them to an injected dependency is not logic—that’s glue code. Unit tests aren’t black box tests—you don’t have to suspend knowledge that certain functionality isn’t actually implemented by the unit.

This leads to the third wrong idea, which is that the main purpose of a test is to prevent as many potential future bugs as possible. Preventing bugs is a benefit of testing, but it is not the purpose: well-designed, well-functioning, maintainable code is the purpose. Focusing on preventing bugs will lead directly to pathological testing, including code duplication, testing third party code, and their degenerate case: testing implementation detail. The classic example is the sort of tests encouraged by the ‘shoulda’ gem. If you’re writing a unit test for an ActiveRecord model that ultimately asserts that a validation is present on the model by checking the model’s validations array—please stop. You’re just duplicating your code and tightly coupling your unit tests to third party code for zero reason. “But what if I accidentally delete that validation?” one might ask. Tests aren’t there to verify that you wrote certain code—they’re there to verify that the logic works correctly. Those aren’t the same thing. If you want to verify a validation, then somewhere in your code it should be tested by varying actual inputs. If your test doesn’t ultimately depend on some input somewhere changing, in all likelihood you’re just duplicating your code, or testing someone else’s code.

No doubt someone will disagree with my interpretation of TDD in light of the above, but once I started to shed these habits my willingness to use TDD and my velocity while doing so skyrocketed. I was no longer dreading the arduous task of writing tons of pointless tests just to make sure every line of code or potential ‘contract’ was covered. I no longer felt that adding a new class automatically meant a bunch of TDD boilerplate. Focusing all my tests on logic meant far fewer breakages when irrelevant glue code changed or low-value code was moved around. I still don’t use TDD all the time, but I find myself shying away from using it out of fear of “design paralysis” less and less.

ActiveRecord is Actively Harmful

This was originally titled “Introduction to ROM: Part I,” but seeing as it focuses almost exclusively on AR and Rails, I’ve decided to rework it into a post specifically about ActiveRecord, with a separate series focusing exclusively on ROM. I’ve retitled this post to reflect the topic more accurately.


Yesterday I was pointed to a comment thread for a blog post titled “Five More ActiveRecord Features You Should Be Using.” The features themselves were some of the usual suspects when it comes to AR anti-patterns: lots of coding by side effect (lifecycle callbacks, change tracking, etc). The interesting thing to me was what happened in the comment thread.

First, @solnic mentions the suggestion that you use the after_commit lifecycle callback to automatically kick off an update to Redis when the database model is updated, and remarks “great, you just coupled your AR model to Redis, every time you persist it, it needs Redis.” He doesn’t say that the goal—synced data—is bad, merely that the implementation is introducing significant coupling. In reply, @awj says:

There can be great value in having secondary data stores continuously kept in sync with primary data changes. There also can be value in not doing this. Stating that either is unequivocally a “bad practice” is little more than cargo cult system design.

Holy leap of logic, Batman. That’s some underpants-gnome thinking… “Don’t use A to implement B because that method increases coupling” does not imply “Don’t implement B.” At first I was angered by what I considered to be dishonest debating tactics, but after thinking about it for a while, I’ve come to realize that it most likely results not from dishonesty, but from a constrictive mindset that a developer, steeped in Rails and ActiveRecord culture, will almost inevitably adopt.

Within the Rails and AR world, whenever good coding practices are pitted against “Rails Way” mantras like DRY and various “best practices”—not to mention expediency—the good coding practices almost always lose. The fact is, there is no good way to implement that sort of automatic syncing between database and Redis that is both well-coded and compatible with the “Rails Way.” To a certain kind of “Rails developer,” the only way to resolve the dissonance is to adopt logic like “Saying I shouldn’t couple is the same as saying I shouldn’t implement my feature—” because when you’re wedded to Rails and ActiveRecord, that is in fact exactly the case.

ActiveRecord—both as it is implemented and as it is used—is a big driver of the culture that insists that tightly integrated code and side-effect driven logic is necessary and desirable. On its surface, it purports to be a powerful and easy-to-use database access layer. Developers like it because they don’t have to do anything to use it—its ease of use right from the start of a project is legendary. Unfortunately, these benefits are illusory. The fact is, ActiveRecord induces insane amounts of coupling in your app and severely restricts developer freedom down the road.

ActiveRecord is Full of Anti-Patterns

How does ActiveRecord lead to coupling? Let me count the ways. The simplest is the globally accessible interface—such as being able to call where on any model from anywhere—which can lead to app code littered with knowledge of the database schema, not to mention that every class has complete unfettered access to your entire database. Named scopes aren’t much of an improvement. How many named scopes look like this:

scope :with_user_and_notes, -> { includes(user: :notes) }
scope :newest_active, -> { where(active: true).order('created_at desc, id desc') }
scope :red_color, -> { where(color: 'red') }

Not only does this barely count as syntactic sugar, but they still expose details of the database and remain available globally, as always. The global is still a significant problem—more semantic scopes would be either completely inflexible or forced to incorporate business logic (those will be some fun tests) to be useful in different circumstances. Other bullshit “best practices” like “thin controller, thick model” lead to monster model classes full of business logic—pretty much the definition of tightly coupled code:

def publish_change_to_clients!(type)
  return unless topic

  channel = "/topic/#{topic_id}"
  msg = {} # actual message value cut for length

  if Topic.visible_post_types.include?(post_type)
    MessageBus.publish(channel, msg, group_ids: topic.secure_group_ids)
  else
    user_ids = User.where('admin or moderator or id = ?', user_id).pluck(:id)
    MessageBus.publish(channel, msg, user_ids: user_ids)
  end
end

What does code to send data to a client have to do with persisting a Post to the database? Beats me. The model class this method came from is over 600 lines long. Everything that this class does—and it is a lot—is more brittle and less maintainable for it.

Less obviously, the one-table-one-model approach couples your business domain and your database schema, which is sometimes fine but often not. I’ll put that another way: a business domain and a database design aren’t mirror images of one another—but Rails and ActiveRecord assume (and insist) that they are. As if that weren’t bad enough, by having so completely obliterated the distinction developers are universally encouraged to view the database as an extension of their Rails app, with schema changes and migrations directly correlated with changes to the app. The idea that your database is completely isomorphic to and a part of your app is sheer folly, but it is almost Rails gospel.

The Database is Not Your App

The fundamental principle at play here is that of the architectural boundary—the place where your app and another system or concern interact with one another. Architectural boundaries aren’t necessarily large, but the larger ones are pretty obvious and important: database, file system, network connectivity, in-memory store, etc. They’re boundaries because from the perspective of your app what lies on the other side is not important—the file system could be real or a mock and your app does exactly the same things. The database could be SQL, NoSQL, or flat files and your app has to use the data in the same exact ways and eventually output the same exact updated data. Conversely, the less agnostic your business code is toward whatever is on the other side of a boundary, the more tightly coupled it is and the weaker the boundary.

If you’re having trouble accepting that your app shouldn’t care about what database you have on the other side of the boundary, consider this: Imagine a world where SQL is an obscure, relatively new and untested technology and NoSQL is the default, go-to data storage solution. Does that change anything about what your app actually needs to do, from a business perspective? Does a single user story change? Does a single formula for calculating some vital piece of data change? No, of course not. On the other hand, how wide is the impact on the code? How many classes have to change, even a little? The best case scenario is only your model classes have to be reworked—but even that alone can be an arduous prospect implicating thousands of lines of business logic.

The idea, again, isn’t that you should care about these things because you might someday replace Postgres with Mongo. The point is your code shouldn’t care about whether its data comes from Postgres or Mongo because it ultimately makes no difference, from a business logic perspective. By making your code care, you are, objectively, making it less valuable in the long-term and increasing maintenance costs, while simultaneously reducing its testability and confidence in any tests. You’re handicapping your code, tying it to irrelevant detail for little to no upside.

The code forming the boundary mediates between two very different worlds—the world of your domain objects and business rules on one side, and the mechanics of data storage on the other. Architectural boundaries are not reducible to a single class wrapping up obscure details of a protocol inside a nice API. Instead, they translate and mediate between your app and the external system, speaking the language of your domain on one side and the language of the external system on the other.

Coupling happens when details cross over the architectural boundary and mold our code in unavoidable ways. This is exactly what happens with ActiveRecord, because ActiveRecord doesn’t actually concern itself with translating between our app and the database—instead it operates from the assumption that your database and your app are the same thing. Rather than translate your business concepts to the database and back again, it simply provides high-level hooks into the low-level boundary not to bridge the boundary, but to erase it.

Side-Effects May Include…

By combining business logic, querying, data representation, validation, lifecycle, and persistence, your app is shackled to a single database and persistence strategy, oftentimes encompassing an enormous amount of the application. This unavoidable fact is directly implicated in another part of that comment that initially made me so angry:

If it’s acceptable for that to “need Redis” then that’s what it does. If it’s not, then maybe you work around it. It’s not like you don’t have options to control behavior there.

Essentially what he is saying is that every part of your app should know about how your model depends on and mutates Redis every time it saves a record, in order to decide if it should work around that behavior. Let that sink in. That’s a recipe for the spaghetti-est of spaghetti code. Your code can no longer simply use the data access class to save a record, and if you want to use the interface it presents for the stated purpose you have to have in-depth knowledge of its implementation at each point of use, lest you run afoul of its side effects. That’s insanity—when you save a damn record you should expect the record to be saved and that’s it. Driving your app by side effect makes it incredibly brittle and difficult to change, and the testing situation turns into a complete disaster.

You don’t need to be an FP acolyte to see why it’s bad that your classes that do basic, universal things like saving to the database would be kicking off all sorts of other business logic. Imagine that every time you turned the oven on, everyone in your family got automatic notifications that dinner was in 30 minutes—unless you remember to disable it by removing the face-plate and detaching the transmitter every single time you want to use the oven for something else. We encapsulate functionality because, for one, the functionality itself is better off isolated: it’s more easily tested, and the logic is cleaner. We also encapsulate functionality because we don’t always want to use things in the exact same ways with the exact same collaborators every time. And in situations where we don’t want to use a particular collaborator, we don’t want to have to actively take steps to avoid using it.

Mo’ Responsibilities, Mo’ Problems

One of the driving factors behind the emergence of NodeJS and Javascript on the server is the canard of shared code. Thousands of devs have been conned into working with an excruciatingly bad language on a platform with a plethora of fantastic alternatives, at least partly through the fantasy of common classes on the server and in the browser. While surely there has been some of that—although I suspect it’s mostly just platform-agnostic libraries that get shared and not so much business code—the reality of the situation at the app developer level is often a horror show of non-reusable code, with the same giant, thousand-line god classes encompassing dozens of responsibilities that you’ll find in many Rails apps. What is going wrong such that a community obsessed with DRY and a community obsessed with code portability both end up with these single-use monstrosities, and what does it say about how much we actually value code reusability?

A lot of the blame goes to the libraries that are popular and the patterns they push. Encouraging—or enforcing—inheritance over composition leads to large classes with numerous responsibilities, just as a matter of course. Community pressure or “best practices” combined with laziness can then lead to an explosion of responsibilities, as plugins and developers add more and more code to a handful of classes. Finally, having an artificially limited range of “kinds” of classes a developer believes he or she can have (Model, Controller) leads directly to a parsimony of classes, and indeed a general trend of developer resistance towards adding new classes (maybe because it makes the “models” folder look so messy.)

DRY, an almost religious mantra in Rails circles, boils down to increasing code reusability through refactoring. Unfortunately, that’s fundamentally at odds with the broader development pattern that is encouraged by almost everything else about Rails. In fact, the way DRY is pushed in Rails circles can lead directly to perverse outcomes. To go back to the after_commit hook and Redis example, the obvious alternative to putting that code in a lifecycle callback is to move it to the controller—invoke that completely separate behaviour where and when you want it. Of course, from a wider perspective this isn’t good design, because it does repeat code. The problem is invoking DRY here and hooking into AR makes the code objectively worse, not better. Moving that code into the model reduced repetition, while simultaneously decreasing the reusability of the code.

The massive classes this sort of development process ends up encouraging prevent code reuse through tight coupling from two directions.

From the top-down, the class makes so many assumptions about how it is being used and what it is working with that it can only be used in a handful of ways, if that. If a graphics class internalizes the generation of output files, it’ll probably be difficult to extend it to support other formats. If your models handle their own persistence, it can be nigh-impossible to persist the same model in different ways depending on context. If your model is also where you put data filtering and formatting accessors, then having to provide different views of the same data can lead to a combinatoric explosion of methods. Decisions that were made universally based on initial convenience almost never pan out in the long-term for most use-cases, leading to awkward compromises and workarounds which ever-more-tightly couple the class to its circumstances.

From the bottom-up, the class locks up code that might otherwise be generalizable and applicable elsewhere. Code to handle the peculiarities of graphics file formats could find other uses, were it not buried in a god class’s private methods. Code to run reports on data can be refactored and made more powerful and flexible if it were its own class. One example of something that is successfully and commonly extracted from the AR hierarchy is serialization (via, e.g., ActiveModel::Serializers), exactly the sort of concern that should be treated as a separate responsibility.

Bottom line: there’s an inverse relationship between composability and number of responsibilities. The more responsibilities you pile into a class, the less composable it is, and the less use you’ll get out of your code, on average (which means you’ll write more code, in the long run.) ActiveRecord is a complete failure on both grounds: AR models are increasingly less reusable as time goes on and they grow larger and introduce side-effects, and the code locked within is completely un-reusable right from the start… yet, it’s all still DRY, somehow.

Rails Models Have Many “Reasons to Change”

The Single Responsibility Principle says (spoilers) that every class should have a single responsibility—which is sometimes defined as “a reason to change.” The “reason to change” clarification is useful because too often “responsibility” is conflated with a Rails “resource”—this class is responsible for posts, that’s a single responsibility, right? Well, no. No, it isn’t. Not at all.

Let’s take a look at the responsibilities a Post class has in a Rails app. It loads the schema from its database, so that it knows what attributes it has. It defines the relationships between your models. It provides for querying the database. It performs domain validations on records. It is the data itself, and handles accessing and mutating record data. It persists (create or update as needed) records to its database. And all of that’s without any user code.

Add in things like Paperclip and Devise and the responsibilities explode, before the dev even begins to pile on business responsibilities. What if you want to change how a post is persisted, without changing anything else? Good luck. Want different validations depending on whether the logged in user is an admin? I hope you like duplicated code and hackish workarounds. Persist auto-save drafts to an in-memory store rather than the database? Abandon all hope, ye who enter.

The thing is, when you first start a project or when you start with simple projects and gradually work your way up in terms of complexity, this can look pretty good—of course you don’t want to worry about where a particular model is getting stored, or managing sets of validations. Of course! It “just works” … for now. Eventually, though, all the things that AR makes so easy and simple at first glance will be your “reasons to change”—maybe not today, and maybe not tomorrow, but soon. Then what? If you were like many Rails devs, I imagine you’d simply “work around it” by using other parts of AR that seem to give you “options to control behavior.”

There are strategies to mitigate some of the damage that ActiveRecord can cause. At best, they reduce but do not eliminate the problem. Regardless of efficacy, they are almost never put into practice. The attitude seems to be—if not outright hostility to any alternative—at least a resigned acceptance that one has made his or her deal with the devil. Far too often, the very worst parts of ActiveRecord are enthusiastically embraced and evangelized. And so it goes.

A Tale of Two MVPs

Modern software development is a dense memeplex teeming with patterns, methodologies, and practices that rapidly mutate and recombine with each other in novel, often infuriating ways. “Agile” is famously a term whose meaning is nebulous and ever-shifting, applied to any set of development processes up to and including waterfall. Another is “Minimum Viable Product,” or MVP. Originating with a specific meaning in the Lean Startup movement, MVP has since morphed into a general notion in software development of the small, focused, well-honed feature set of an initial release milestone.

Both ideas are valuable, but problems emerge when wisdom that pertains to one kind of MVP is blindly applied forward to the new, mutated MVP. A great example is this graphic, which I’ve mentioned before and just now stumbled across in an article on software versioning:

While I do not know the exact provenance of this graphic (Update: It’s either created by Henrik Kniberg or directly derivative of his work), I can very confidently state that it originally applied not to software development, but to the original Lean Startup concept of the Minimum Viable Product. I know this because it is a great illustration of the MVP in a Lean Startup, and an absolutely horrible illustration of the MVP in software development.

The MVP in Lean Startup is Concept-First

Lean Startup isn’t a software development methodology—it’s an approach to entrepreneurship and business. The Minimum Viable Product, properly applied, isn’t a trimmed down version of a sprawling—if undocumented—business plan. Rather, it’s the quickest, cheapest, easiest-possible solution to whatever problem the company is tackling in order to make money. In the context of Lean Startup you don’t sit down, dream up “we’re going to be Twitter/Yelp/Google/Vimeo/Foursquare for Quickimarts/Bus stops/Bowling alleys/Dog parks,” excitedly plot out some expansive ecosystem/plan for world domination, and then scale that all back to an MVP. Instead, first a monetizable problem—something for which you can sell a solution—is identified, and then a quick-and-dirty fix is plotted out in the form of the MVP.

The reason the comic makes a lot of sense from this perspective is that you explicitly do not start out aiming for the moon and then have to adjust your sights downward—there’s money sitting on the table, and you want to start snapping it up as quickly as possible using whatever tools you can. Software development is actually a worst-case scenario, from a Lean Startup perspective, so piecing together a gold-standard system from scratch is understandably discouraged.

Instead, the “skateboard” solution might take the form of something as simple as a Google Sheets/Forms setup or a Filemaker Pro app. What matters is that you solve the problem and that you can monetize it. The “bike” solution might be a tricked-out Wordpress site with some custom programming outsourced to a contractor. In this context, it doesn’t matter one bit that each solution is discontinuous with the next, and is essentially thrown away with each new version—what matters is that money start coming in as soon as possible to validate the idea, and that the solution (as distinct from software) is iterated rapidly.

The MVP in Software Development is Software-First

You’d be hard pressed to find a professional software engineer who’ll respond “Excel spreadsheet” or “Wordpress site + 20 plugins” when asked to describe an MVP. It’s simply a different country. The core idea—do as little work as possible to get to a point where you have something to validate the solution—is retained, but the normal best practices of good code design and project management are still paramount.

As software engineers, we’re not going to half-ass a partial solution to a problem, pretend it’s a skateboard, and then throw that work away to start on the scooter iteration. Instead, an MVP is the smallest app we can write well that is still useful and useable. From there, we iterate and experiment with new features and functionality, taking into account real-world feedback and evolving demands and opportunities.

From a product perspective, we need to take a look at what we mean by minimal and by viable. Minimal, in one sense, means simply stripping things away until you get to a vital core. The tension is with viability—probably the most ambiguous of the three parts of “MVP.” Some take the position that “viable” is in relation to the market… something is “viable” if it can be sold or marketed to users. I argue that “viable” is to be taken in the same meaning as a foetus being “viable,” which is to say it is able to be carried to term. A minimum viable product, then, is the smallest set of features which can conceivably be developed into your product. An MVP should consist of an essential, enduring core of code that will form the foundation for further efforts. Anything else is simply a prototype or proof of concept.

A Concept-First Software MVP Can Be Disastrous

As erroneously applied to software, this can also be called the teleological MVP, or alternatively the function-over-form MVP. This approach emphasizes the principal value we’re delivering for our users rather than how we intend to manifest that in a final product. That value is distilled, and minified, until we have a vision of a product which is significantly less effort than what we actually intend to make. So, if we’re trying to create a sports car, as in the comic, you might interpret the value as “getting from point A to point B more efficiently than walking,” then squeeze that down until you arrive at a bike. Or a skateboard.

The chief problem with concept-first is that the MVP does not exist in a vacuum. Like any tool it has to be measured by how well it helps us achieve the intended results. If you’re developing a sports car—or the software equivalent—then an MVP is only valuable to the extent that it helps you to produce, eventually, a sports car. If you develop as your MVP the skateboard version of a sports car, where do you go from there? The only thing you can possibly do is to throw the skateboard away and start over, or become a skateboard company.

The comic illustrates this process with smiley faces, implying that with each version you have happy users and thus are on the right path. In reality, it shows four product development dead-ends amounting to—in real terms—thousands of hours and potentially millions of dollars of wasted time and effort. Take, for instance, the case of Gamevy, which produced as the MVP for their real-money gambling business a “freemium” fake-money product which—while certainly less effort and time-intensive than their ultimate goal—nearly doomed the company.

Forget The Cutesy Comic

That comic, at the top of this article? It’s about Lean Startup MVPs. It isn’t about developing software. The process it has crossed out as undesirable is actually the approach we want to take to developing software—good, well-architected systems composed of ever more generalized, ever more fundamental systems. Don’t program a skateboard when you mean to program a motorcycle—you might turn out “something” sooner than you otherwise would, but it is a false savings and you’ll pay for it very, very quickly.

iOS Design Patterns: Part II

Here are three more iOS development patterns that fly somewhat in the face of answers you might see on Stack Overflow touted as “best practices.” Two of these are rock solid and a third is on probationary status, which I’m throwing out there as a discussion point.

Use structs to define appropriate architectural boundaries

It’s really easy to blur architectural boundaries in an iOS app. That’s partly thanks to the Delegate pattern, which encourages concerns to spread across multiple classes with varying roles and responsibilities. When we lose sight of that boundary between view and controller we inevitably neglect the fact that appropriate and tightly defined boundaries are the backbone of a well-designed and maintainable architecture. An extremely common and simple example of where this can happen is configuring UITableViewCells in a UITableView’s data source.

Often we’ll end up with one of two extremes: either a controller that knows a lot about the internal view structure (directly setting UILabel strings and colors and other blatant Demeter violations), or a view object that takes a domain record (such as an NSManagedObject) and is responsible for translating that high level object into specific pieces of display data, itself. In either case, we have parts of our app that are tightly coupled to things from which they should be insulated. The contagion can easily spread, for instance by moving view-specific display logic to that high level domain record in order to “clean things up.”

Somewhere between tweaking UILabels in your view controller and passing NSManagedObjects all over the place is the sweet spot of just enough data for the view to render itself, with a minimum of logic required to do so. Minimizing logic is a key goal, here—the view system is one of the more complex and opaque parts of an iOS app, and one of the last places you want to have code dependent on high-level semantics, if you can avoid it. Any code in your view that takes a high-level concept and translates it into things like colors, text strings, and the like is code that is significantly harder to test than if it were elsewhere.

Immutable structs are fantastic for exactly this purpose. A struct provides a single value that can be passed across the boundary while encapsulating a potentially unlimited amount of complexity. Immutability simplifies our code by ensuring a single entry point for the configuration logic, and helps keep the logic for generating the struct’s member values in one place. They can be as high-level or low-level as is appropriate given the view and the data. For basic table views that are primarily text I might simply have a struct of String, Bool and UIColor values mapped to each visual element. On the other hand, I have a view for drawing graphs that takes a general description of the graph to be drawn, where there is less of a direct connection between the values I pass and what ultimately gets set as the final configuration.

(In the latter example, the view makes use of other classes to interpret the input and produce the final display values—to what extent you continue to rely on the view-side of things being able to interpret data in complex ways will vary. In my case, the controller “collates” the data into a general form, and the view is responsible for turning that into a renderable form)

In either case, there is one correct way to cross the boundary between controller and view, and provided you keep your view outlets private (as you should) you’ll have confidence that your controllers and views remain both loosely coupled and synchronized in their effects.
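
To make that concrete, here is a minimal sketch of the pattern; the cell, labels, and field names are all hypothetical, not taken from any real app. The controller builds an immutable value, and the view does nothing but apply it:

import UIKit

// The single value that crosses the controller/view boundary.
struct PostCellModel {
  let title: String
  let subtitle: String
  let highlighted: Bool
}

class PostCell: UITableViewCell {
  // Outlets stay private; the struct is the only way in.
  @IBOutlet private weak var titleLabel: UILabel!
  @IBOutlet private weak var subtitleLabel: UILabel!

  func configure(model: PostCellModel) {
    titleLabel.text = model.title
    subtitleLabel.text = model.subtitle
    backgroundColor = model.highlighted ? UIColor.yellowColor() : UIColor.whiteColor()
  }
}

The data source maps its domain record into a PostCellModel and calls configure. The view never sees the record, and the controller never touches a label.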

Use awakeFromNib as a form of dependency injection

Google around for how to get at your NSManagedObjectContext from your view controllers, and you’ll get two answers:

Set it on your root view controller in your AppDelegate, then pass it to each view controller you present

One downside to this solution is that, at least as of the last project I began, the AppDelegate is no longer necessarily involved in bootstrapping the storyboard. You can get at your root view controller via the window, getting a chain of optionals leading to your controller and then setting your managedObjectContext property, but it is exceedingly slapdash, at best. Another problem is all the laborious glue code involved in ensuring an unbroken chain of passing along the context, bucket brigade style, between your root and any controller that might need it. All of this is in service of avoiding globals, as advocated by the next solution:

(UIApplication.sharedApplication().delegate as! AppDelegate).managedObjectContext

Anytime this solution is mentioned, comments about avoiding globals or Apple having rejected this approach surely follow. In general, yes, globals are bad, for varying reasons (some of which have less to do with “global” and more to do with general pitfalls of reference types.) In this case, the global is bad because it bakes an external dependency into our code. In my opinion, a global is bad in this situation for the exact same reason that using a class constructor can be bad—absolutely nothing would be improved here if AppDelegate were a constructor we could call, rather than a property.

What this all is crying out for is a form of dependency injection—which is why the first solution is often preferred, being a poor man’s dependency injection solution. Too poor, in my view, since it ties a class’s dependencies to the classes it might eventually be responsible for presenting. That’s craziness, and even worse than just using UIApplication.SharedApp... inline, if followed to its natural conclusion.

Thankfully, because we’re using storyboards, we can have the best of both worlds. First, yes: your methods should be dependent on a managedObjectContext property on your class, not directly referring to the global. Eliminate the global from inline code. Second, no: passing objects bucket brigade style from controller to controller isn’t the only form of injection available to us. The storyboard can’t set arbitrary values on the classes it instantiates—unfortunately—but it gives us a hook to handle in code any setup that it can’t: awakeFromNib.

The fact that awakeFromNib is in our class and not somehow external to it is a complete technicality. To the extent that we’re being pushed into doing the least unreasonable thing we can, using global or top level methods in awakeFromNib is fair game—this code is only ever run by the storyboard, at instantiation time. To be fair, awakeFromNib is a blunt instrument, but we needn’t live with its dictates, as plenty of other hooks are called before the controller is actually put to use. Ultimately, I view using awakeFromNib in this way as no different than specifying a concrete class to instantiate in a storyboard and connect to a view controller via a protocol-typed outlet.

(In this specific case, one additional thing I would do is have my own global function to return the managed object context, and call that in awakeFromNib, as a single point of contact with the “real” global. I’ll also note that I avoid having my view controllers directly dependent on NSManagedObjectContext as much as possible, which is another pattern I’ll be discussing.)
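
A minimal sketch of that arrangement might look like the following, assuming the stock AppDelegate.managedObjectContext from the Core Data template; the controller class and the mainManagedObjectContext() helper are hypothetical names:

import UIKit
import CoreData

// The one place that touches the "real" global.
func mainManagedObjectContext() -> NSManagedObjectContext {
  return (UIApplication.sharedApplication().delegate as! AppDelegate).managedObjectContext
}

class PostsViewController: UIViewController {
  var managedObjectContext: NSManagedObjectContext!

  override func awakeFromNib() {
    super.awakeFromNib()
    // Runs only when the storyboard instantiates this controller.
    managedObjectContext = mainManagedObjectContext()
  }
}

Everything downstream of awakeFromNib depends only on the property, so a test can assign a test context directly and never touch the global.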

One last thing: why awakeFromNib and not initWithCoder? First, awakeFromNib is called in any object instantiated by the storyboard, not just views and view controllers. Second, it reinforces the special-cased nature of the injection, over the more general case of object instantiation. Third, outlets are connected by the time awakeFromNib is called, in case that’s ever a concern. Fourth, initializers are very clearly a proper part of their class, but awakeFromNib is, arguably, properly part of the storyboard/nib system and only located on the class for convenience, giving our class-proper code design a bit of distance from what goes on therein.

Handle view controller setup in UIStoryboardSegue subclasses

This one might be a bit more controversial. I’m going to see how it shakes out, long-term, but from a coupling-and-responsibilities perspective it seems a no brainer. In short: configuring a new view controller isn’t necessarily—or even usually—the responsibility of whatever view controller came before it. If only we had a class that was responsible for handling the transition from one view controller to another, where we could handle that responsibility. Wait, there is such an object—a segue. Of course, segues aren’t a perfect solution, since using them conflates animations with nuts-and-bolts setup. They are, however, a natural, lightweight mechanism for getting random crosstalk code out of our view controllers, and the field for setting a custom UIStoryboardSegue class is right below the field for setting the identifier.
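
Here is a bare-bones sketch of the idea; both view controller classes and their properties are hypothetical, and this particular segue simply performs a push itself:

import UIKit

class PostListViewController: UIViewController {
  var selectedPostID: String?
}

class PostDetailViewController: UIViewController {
  var postID: String?
}

// Set as the segue's custom class in the storyboard.
class ShowPostDetailSegue: UIStoryboardSegue {
  override func perform() {
    if let list = sourceViewController as? PostListViewController {
      if let detail = destinationViewController as? PostDetailViewController {
        // The handoff lives here, not in prepareForSegue on the source controller.
        detail.postID = list.selectedPostID
      }
    }
    sourceViewController.navigationController?.pushViewController(destinationViewController, animated: true)
  }
}

The list controller no longer needs to know anything about the detail controller; it just triggers the segue.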

If there’s one underlying theme throughout these patterns it’s “stop using view controllers as junk drawers for your code.”

iOS Design Patterns: Part I

I’m working on a brand-spanking-new iPhone app, for the first time in a while, and I’m trying to take a fundamentals-first, good-design approach to development, rather than simply regurgitate the patterns I’ve used/seen in the past. Here are three “new” approaches I’m taking this time around. Each of these patterns is broadly applicable regardless of your language or platform of choice, but with iOS development, and XCode, they can take a form that, at first, might look odd to someone used to a particular style.

Clean Up View Controllers With Composition

Ever popped open a class and seen that it conforms to 10 protocols, with 20-30 mostly unrelated methods just piled on top of one another? This is a mess: it makes the class harder to read and debug, it makes individual lines of logic harder to test and refactor, it can mean an explosion of code or subclasses you don’t actually need, and it precludes sane code reuse.

By applying the single responsibility principle—and the principle of composition-over-inheritance—we can mitigate all of those problems, moving code out into individual classes for each protocol/responsibility. You’ll win gains pretty quickly when you realize, for instance, that a lot of your NSFetchedResultsController-based UITableViewDataSource code is nearly identical, and a single class can suffice for multiple view controllers.
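
As a rough sketch, assuming a closure for per-cell configuration (one choice among many), such a shared data source might look like this:

import UIKit
import CoreData

// One data source class shared by any controller backed by a fetched results controller.
class FetchedResultsDataSource: NSObject, UITableViewDataSource {
  let fetchedResultsController: NSFetchedResultsController
  let cellIdentifier: String
  let configureCell: (UITableViewCell, NSManagedObject) -> Void

  init(fetchedResultsController: NSFetchedResultsController,
       cellIdentifier: String,
       configureCell: (UITableViewCell, NSManagedObject) -> Void) {
    self.fetchedResultsController = fetchedResultsController
    self.cellIdentifier = cellIdentifier
    self.configureCell = configureCell
    super.init()
  }

  func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return fetchedResultsController.sections?[section].numberOfObjects ?? 0
  }

  func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath)
    let object = fetchedResultsController.objectAtIndexPath(indexPath) as! NSManagedObject
    configureCell(cell, object)
    return cell
  }
}

The view controller then just owns one of these, built with its own fetch request and cell setup, instead of conforming to UITableViewDataSource itself. (Keep a strong reference to it; the table view does not retain its data source.)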

That goes for view code, too: If you’re poking around in the view layer it’s probably a good idea to do it in a UIView subclass. The name of the game isn’t to minimize the number of classes in your project, and separating code by function appropriately is the basis of good code design. For that matter, the name of the game isn’t merely “code reuse” either—whether or not you’ll ever take two classes and use them independently isn’t the mark of whether they should just be smooshed into one giant class.

Cut Your Managed Objects Down to Size

What’s the responsibility of your NSManagedObject subclasses? To coordinate the persistence of their attributes and relationships. That’s it. Taking that data and doing various useful things with it is not part of that responsibility. Not only do all those methods for interpreting and combining the attributes in various ways not belong in that specific class, but by being there they are manifestly more difficult to test and refactor as needed. If you’re looking at a bit of code to—I don’t know—collect and format the names and expiration dates of someone’s magazine subscriptions, why should that code be dragging all of core data behind it?

At a minimum, most of those second-order functions can be split out into a decorator class or struct. A decorator is a wrapper that depends simply on being able to read the attributes of its target object, and can then do the interesting things with reading and displaying that data—without involving core data at all. How do we eliminate core data entirely? By using a protocol to reflect the properties of the NSManagedObject subclass. Testing any complex code in your decorator is now a cinch—just create a test double conforming to that protocol with the input data you need.

This is a super simple example of a decorator I use to encapsulate a Law of Demeter violation. This illustrates the form, but the usefulness pays increasing dividends as the code gets more complex. Note, also, that you needn’t have a single decorator for a given model… different situations and domains might call for differentiated or completely orthogonal decorators. In that way, decorators also provide a way to segregate interfaces appropriately.

protocol ExerciseAttributes {
  var weightType: NSNumber? { get }
  var movement: Movement? { get }
}
extension Exercise: ExerciseAttributes {}

struct ExerciseDecorator {
  private let exercise: ExerciseAttributes?

  init(_ exercise: ExerciseAttributes?) {
    self.exercise = exercise
  }

  var name: String? {
    return exercise?.movement?.name
  }
}

Storyboards Can Help Manage Composition

I used to have a knee-jerk reaction to storyboards. They felt like magic and as if all they did was take nice, explicit, readable code and hide it behind a somewhat byzantine UI. Then I realized what they really do: they decouple our classes from each other. The storyboard is a lot like a container. It lets us write generic, lightweight classes and combine them together in complex ways without having to hardcode all those relationships, because there’s another part of our app directing traffic for us.

After you’ve moved all your protocol and ancillary methods from your view controllers, you’ll probably end up instead with a bunch of code to initialize and configure the various objects with which the view controller is now coordinating. This is an improvement for sure, but you still end up with classes that mostly exist just to strongly couple themselves to other classes. That glue code is also so much clutter, at best. At worst it has no business being in your view controller class at all, but for a lack of anywhere else to put it. Or is that so?

Amidst the Table View, Label, and Button components in the Interface Builder object library is the simply named “Object” element. The description reads:

Provides a template for objects and controllers not directly available in Interface Builder.

“Not directly available in Interface Builder?” Then what’s the point, if we can’t do anything with it? Ah, but we can do things with it: we can hook up outlets and actions, and configure the objects with user-defined runtime attributes. We can, simply put, eliminate large swaths of glue code by letting the storyboard instantiate our coordinating classes for us, configuring them with connections to each other and to our views, and even allow us to tweak each object on a case-by-case basis. All with barely any code cluttering up our classes.
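
As a sketch of the kind of object this makes possible (all names here are hypothetical), the class below can be dropped into a scene as an Object, have its outlets and the search bar's delegate connection wired up in Interface Builder, and be tweaked per-scene with a runtime attribute:

import UIKit

// Instantiated by the storyboard via the "Object" element; no view controller code involved.
class SearchCoordinator: NSObject, UISearchBarDelegate {
  @IBOutlet weak var tableView: UITableView!

  // Settable per-scene via User Defined Runtime Attributes.
  var minimumQueryLength: Int = 2

  func searchBar(searchBar: UISearchBar, textDidChange searchText: String) {
    guard searchText.characters.count >= minimumQueryLength else { return }
    // Filter whatever this object coordinates, then refresh the table.
    tableView.reloadData()
  }
}

The view controller for the scene never has to know this object exists.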

You might have a strong intuition that a lot of that belongs in code, as part of “your app.” If so, ask yourself: if this belongs in code, does it belong in this class? Truly? Cramming bits of orphan code wherever we can just to have a place to put them is a strong code smell. Storyboards help eliminate that. Embrace them.

Huge drawback: The objects you instantiate this way have to be @objc, and as such you can’t have @IBOutlets for a protocol type that isn’t also @objc. This means you lose the ability to pass and return non-Objective-C types such as structs, enums, or tuples. This is really frustrating and a significant limitation on using the technique to clean up your view controllers more generally.

When Agile Isn't

About three years ago I left an NYC app startup—which I will not name here—after just over a year there. The immediate cause was personal: the emotional stress from an increasingly perilous interpersonal environment on top of an unsustainable and severe “crunch time” schedule brought me to my breaking point. Before too long the root causes that ultimately underlay my own departure brought the entire company to the breaking point, as well. After this all went down I went through a few phases, emotionally: guilt, anger, and finally acceptance of everything that happened and my role in it all.

A lot of things were conceived of wrongly, planned wrongly, executed wrongly, and finally went wrongly. This isn’t just a story of how a short-funded startup with a popular niche app (at one point featured in an early iPhone “there’s an app for that” TV advertisement) gets put into the ground. Instead, this is a cautionary tale of what can happen when Agile goes wrong. It’s easy to claim to be “Agile” when what you really mean is that you’re just too small to have built up a bureaucracy around software development. In my case, there were some big warning signs that I completely ignored or didn’t know to look for, until it was too late.

The Complete Rewrite

What really attracted me to the company at first was the somewhat unique combination of a popular, seemingly simple app with a loyal user base that also had an atrocious user experience. It was well into the second version, both written by a contract studio, and had accumulated a fair bit of cruft around an initially awkward navigation scheme. It was a fantastic opportunity—fix up the design, make a popular app even better, and put a nice feather in my cap while simultaneously ridding the company of a chain around its ankle. I went into the position knowing we were going for a “complete redesign,” but I’m pretty sure we didn’t decide on the complete rewrite until later. I don’t think there was any serious discussion of NOT doing a complete rewrite. That was a mistake.

The rewrite is basically pressing a reset button on your app. Almost everything gets thrown out, code-wise. You usually also decide to take advantage of newer tools and technologies, so some knowledge, experience, and time gets thrown out as well, in effect. Any testing, code confidence, or support history is gone. You’re starting completely from scratch. It can be pretty appealing, especially if you have no sentimental attachment to the current code base. It can also be disastrous.

In our case, a rewrite meant cutting ourselves off from Our App 2.0, our users, and all of our success up to that point. Rather than iterating feature-by-feature, cleaning up and improving the app by degrees, and letting our users come along with us while we improved a stable code base into the app we wanted, we instead effectively stopped development—from the outside perspective—and got lost in an increasingly hellish year of trying to recreate what we already had. Finally, when the app was released, an explosion of user anger completely rejected the rewritten app, which plummeted to single star territory in the App Store.

The Grand Vision

At the root of any guilt I felt after the company folded was my role at the very start of the process. I don’t remember if this happened after we decided on the rewrite or if it was part of the discussion. Either way, I went into a planning meeting for 3.0 to pitch my vision, and it was a doozy. In hindsight, it probably could have been a 3-5 year vision, or even just a concept around which to build reasonable, real-world plans. In actuality, everyone either loved it or accepted it without much comment and it became our 6-month blueprint. Words and phrases I used—often simply as concept or metaphor—were explicitly applied to features and concrete elements in the app. Some of them even ultimately appeared on the marketing website after release.

Even Agile, which eschews specifications and up-front planning, has an ultimate objective that, at its core, will remain more-or-less fixed, barring catastrophe. The Vision was at once too much and too little—too ambitious and over-specified to give us the flexibility to adapt in order to preserve our core objective, and too underspecified to let us anticipate, plan for, and handle the problems we were going to hit. It got to the point where I would cringe every time someone would use one of those words I threw out during the pitch, which were increasingly sacrosanct, even as development dragged on and things just weren’t coming together.

The Business Case

I have to say, first, that I have the business awareness of a fruit fly. I once accidentally got going on a rant about “vulture capitalists”—while talking to a venture capitalist. If you need me outside of the dev shop for anything it’d better involve coaching and clear expectations of what it is I’m needed to wear, do, and say. That said, I understand the point of a business is to make money, and ultimately the responsibility of the CEO is to see to that. I have tremendous sympathy for a guy trying to turn an early, surprising success in the app space into a going concern with just a few months of time and with less than a million dollars raised.

What that meant for the developers is that the concept that was pitched and embarked upon for 3.0 quickly became an iron-clad part of the business plan. I was in no way involved in or privy to the money-raising or deal-making, but I saw how quickly what should have been a “stretch goal” became a hail mary, a hill for the company to die on for lack of anything else to do. It’s hard to be “agile” when your “minimum viable product” is determined not by your users or the features you want and can implement quickly, but by the need to meet unyielding business case requirements. Our “minimum viable product” was the company-saving potential of a “Wow!” release that would knock everyone’s socks off.

That reality, spoken or unspoken, set the tone for the majority of the development process, but it also manifested in very concrete ways, particularly during the end. We were writing code, designing interfaces, and implementing features for at least one business development deal that could be described, at best, as “ancillary.” Another was in the talking-and-planning stages but, thankfully, didn’t progress to the point of actually bogging down the development work. Agile, like any methodology, must be put to use in service of the company’s broader business goals—but it is also easy for it to be sabotaged by specific business goals. With money running out and everyone on edge it can be hard to see the difference.

The Unending Marathon

For me, the most important tenet of Agile is “release early, and release often.” This is the distillation and union of three simple ideas: the minimum viable product, the sprint, and producing “usable software”.

The “MVP” can mean a lot of things to different people—there’s a graphic out there somewhere ridiculing the idea that the MVP for an automobile is a push scooter, rather than, for instance, a bare chassis with 4 wheels and an engine. Whatever actual form it takes, in order to be meaningful it has to be two things: (1) composed mostly or entirely of code and features that will continue to be relevant for the entire remaining development process, and (2) a fairly small fraction of the entire development process. Simply put: you need both something small and something you can build around.

A sprint is a well-defined period of time (1 or 2 weeks), at the beginning and end of which you should have a high quality piece of software. This is the “usable software” part—sprints are meant to be a sustainable, cyclical process of evolving a piece of software incrementally while maintaining high standards for the product at each juncture.

For various reasons—implicit cultural reasons and unspoken business reasons—we weren’t ever going to consider releasing ANYTHING until we had 3.0 wrapped up in a bow. Unfortunately, it’s really easy for “MVP” and “usable software” to become a joke when nothing actually has to get released to anyone. Sprints become simple deadlines, which are blown through or extended as convenience requires. Our idea of “usable software” was whether the damn thing compiled and passed tests, not whether we had a piece of high-quality, releasable software at the end of each sprint.

On the flip-side, we went through a lot of changes with each sprint. Sprints felt like an excuse or opportunity to make or propose changes in a pretty ad-hoc fashion. Every time a sprint ended and I looked at the awkward, half-finished app there was another big tweak I wanted to make to steer it back towards the Vision. The sprint cadence laid a lattice over our rigid big-picture plan, bringing the absence of any small-picture plan into stark relief. It was easy to see where things weren’t working at a high level, and all I could do to fix that was muck with them on the immediate scale of the sprint and whatever features we were working on at the time.

The closer we got to the finish line, the further away it seemed to be. At the end it felt like we were standing still. It was very distressing to find myself, after 10 grueling months, staring across the canyon between where we were, and where we needed to be a month ago—with no path across. It was increasingly paralyzing, and the only plan anyone else had was just to churn through the remaining holes to get us up to the near edge of the cliff. Rather than let the MVP, each sprint, and our releases guide our development we had random-walked right into oblivion.

And So It Goes

Like I said at the beginning, a lot of things went wrong. I don’t know how things would have turned out if we had been more Agile, in a meaningful way. I’d like to think we’d at least have put out some pretty good software releases along the way. Whether we’d have kept the amity of our users, or satisfied our investors… who knows. Maybe there wasn’t any winning—we were short on time, money, and room to maneuver in our industry. Maybe, in the end, we were always fated to be one of the 90% of startups that fail. The only thing I really know for certain is that whatever we were, from start to finish, we weren’t Agile.

How Do I Unit Test This?

Hang out in the IRC, Slack, or Gitter rooms of open source projects for a few days and before too long you’ll see someone ask how to unit test some part of their app. It’s particularly common with large frameworks that encourage inheritance over composition, which usually results in a great deal of environmental setup standing in the way of efficient, automated testing on a unit basis. It sometimes makes me feel bad, but usually my answer is: you can’t.

If you’ve lashed your code so tightly to your framework that you need to jump through hoops to test it, then you’re almost certainly not unit testing it. Testing code that’s in a subclass of ActiveRecord::Base is an integration test. Testing how an Angular component renders using the framework’s templating system is an integration test. It’s hard to write a unit test when your app is forcing you to write an integration test.

Why do we even test at all?

When it comes to testing—any testing—one must always keep in mind that the actual point of the testing is to help us write better software, not to meet some quota for code coverage or tests written. So many devs are content to write bullshit, space-filling tests just to keep up appearances, or out of a sense of obligation. The emphasis in some communities (cough-ruby-cough) on “test-driven” design or development is particularly problematic here, since too often there’s an over-emphasis on writing the test first as the only hallmark of TDD, and a complete ignorance of how to let the test drive the code—the actually important part.

“Best practices” or “being idiomatic” aren’t magical outs here, either. Design patterns and best practices are great, insofar as they actually result in good code design. It is self-evident, however, that if the way a developer codes and tests is predetermined by cookie-cutter conventions then that developer is not letting the tests drive anything other than the clock. While this often puts the lie to claims of test-driven development, it isn’t just a concern for aspiring practitioners of TDD. Awkward, jury-rigged, and brittle tests should be setting off alarms and clueing us into code smells and technical debt whether we write the tests before the code or after.

Just calling it a unit test doesn’t make it a unit test

When it comes to unit testing in particular—where TDD is most natural and effective—there are two rules to follow in order for something to be a unit test, in a meaningful sense:

  1. You need to be able to mock any dependencies of the unit
  2. You need to own all the dependencies of the unit

These rules, just like the rule of writing tests at all or writing the test first, are in service of a higher goal: allowing the process of writing tests to make it clear to us where our design needs to change. This is the most basic way tests “drive” development—by encouraging design choices that make it possible to test in the first place.

Two approaches to mocking

One approach to the first rule is to figure out how to reduce and simplify your dependencies. Just by chopping up one class into several—each with one or two dependencies—the code almost magically becomes much more easily tested, refactored, and extended. This is a classic example of the test driving the improvement of your code by encouraging the separation of responsibilities.
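
To make that concrete, here is a minimal sketch (the class and method names are hypothetical, and the test assumes RSpec doubles, run via the rspec CLI): a small unit with a single injected collaborator, where the test makes both the dependency and the unit’s contract obvious at a glance.

    # A hypothetical unit with exactly one injected dependency: anything that
    # responds to #charge. In production that might wrap a payment gateway;
    # in the test it's a simple double.
    class OrderProcessor
      def initialize(payment_client)
        @payment_client = payment_client
      end

      def process(order)
        @payment_client.charge(amount: order.fetch(:total), currency: "USD")
      end
    end

    # The only thing to mock is the one collaborator, and the expectation
    # doubles as documentation of the unit's contract.
    RSpec.describe OrderProcessor do
      it "charges the order total" do
        payment_client = double("PaymentClient")
        expect(payment_client).to receive(:charge)
          .with(amount: 25_00, currency: "USD")

        OrderProcessor.new(payment_client).process(total: 25_00)
      end
    end

Splitting a god class along these lines is mostly just extracting methods into collaborators and passing in what they need; the simpler tests fall out as a side effect.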

The second approach is to look at your oodles of dependencies and piss and moan about all this mocking you have to do. Slog through it for a few hours. Pop into a chatroom. Let someone tell you that you can just test directly against the database. Write an integration test disguised as a unit test. Finally, call it a day for the rest of your career.

One reason so many developers insist on the tests adapting to fit their design, rather than the other way around, is because it isn’t actually their design at all. Frameworks that encourage code to be piled into a handful of classes that fit a set of roles determined by some development methodology do developers a disservice. Frameworks aren’t bad, necessarily, but when it’s considered “best practice” for the developer to forfeit all responsibility for their app’s architecture and their code’s design, it makes it impossible for the developer’s tests to inform the development process.

If you start off by subclassing someone else’s code you’ve almost certainly fallen afoul of the first rule right from the start. You’ve introduced a massive, irresolvable dependency into the very foundation of your code. Sometimes you’ll have little choice but to rely on scaffolding provided by the dependency in order to test your own code, integration style.

The two operative words in the first rule are “you” and “able.” The rule isn’t “It must be theoretically possible for someone, with unbounded knowledge of the dependencies, to mock the dependencies,” or even “You need to have mocks for the dependencies, from wherever you can get them.” If you can’t look at the class and immediately know what needs to be mocked and how to mock it, that should be a huge red flag.

Only mock what you own

The second rule is a consequence of a third rule: only mock what you own. You own your project’s pure classes, and to the extent that you subclass you own whatever logic you’ve added. You don’t own the base classes, despite their behaviour being incorporated via inheritance. This is another rule where the face value isn’t so much the point of it as the consequences: by only mocking your own classes, you’re pushed into building out facade and bridge classes to formalize the boundaries between your app and any external systems.
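
As a rough sketch of what that looks like in practice (ThirdParty::Geocoder here is a stand-in for whatever gem or HTTP client you actually use): the facade is the only class that knows about the external library, it is small enough to cover with a thin integration test, and everything else in the app mocks the facade, a class you own.

    # A thin facade we own, wrapping a third-party geocoding library.
    # Only this class knows the external client exists.
    class GeocodingFacade
      def coordinates_for(address)
        result = ThirdParty::Geocoder.lookup(address)  # external call lives here
        { lat: result.latitude, lng: result.longitude }
      end
    end

    # Units elsewhere depend on the facade's small interface and are tested by
    # mocking it, not by stubbing methods on someone else's gem.
    class StoreLocator
      def initialize(geocoder)
        @geocoder = geocoder
      end

      def nearest_store(address)
        coords = @geocoder.coordinates_for(address)
        # ...find and return the store closest to coords...
      end
    end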

Tests are much more confidence-inspiring when the mocks they depend on are rock-solid doubles of tiny classes each with a single responsibility. Tests that instead stub one or two methods on a huge dependency are brittle, are prone to edge cases, increase coupling, and are more difficult to write and tweak with confidence. Tests of classes that themselves have to be stubbed are almost worthless.

Thinking outside of the class

Following these three rules can help put the focus back on writing well-structured, maintainable code. It’s not always obvious, however, what changes need to be made. If a developer is staring at a class that descends from ActiveRecord::Base, includes a couple of plugins, and carries a raft of methods that all need to be tested, it’s understandable to look askance at the notion that AR and those plugins should be expunged in order to test the class. After all, without AR they don’t even have a class to begin with, right? The path of least resistance all too often is just to write an integration test using the entire stack.

In these situations one must keep in mind that “unit” and “class” are not identical, and to ask not “how can I possibly remove these dependencies from my class?” but “how do I remove my code from this class, which I don’t really own?” By moving those methods off to other classes as appropriate (formatting, serializing, and complex validations are all things that might live on an AR class but can easily be broken out into their own plain-old-ruby classes), we’ve accomplished the same thing. So much ActiveRecord-dependent code can be refactored to depend only on a hash (or OpenStruct) of attributes.
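
A sketch of that refactor, with hypothetical names: the presenter below was pulled off an ActiveRecord model and depends on nothing but a hash of attributes, so its unit tests never load the framework or touch a database.

    # A plain-old-ruby class extracted from an ActiveRecord model. It takes a
    # hash of attributes, so it can be unit tested with a literal hash.
    class UserPresenter
      def initialize(attributes)
        @attributes = attributes
      end

      def display_name
        [@attributes[:first_name], @attributes[:last_name]].compact.join(" ")
      end
    end

    # In a test (or the console):
    #   UserPresenter.new(first_name: "Ada", last_name: "Lovelace").display_name
    #   # => "Ada Lovelace"
    #
    # The AR model keeps a one-line delegation:
    #   def presenter
    #     UserPresenter.new(attributes.symbolize_keys)
    #   end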

It’s possible to use monolithic frameworks and still care about good design. Finding ways to take ownership of our code away from the framework is crucial. Your tests should be a searchlight, pointing out places where your code is unnecessarily tangled up in someone else’s class hierarchy.

Preventing bad testing habits

Developers often begin their professional life with a few high-level heuristics that are, unfortunately, continually reinforced. A few relevant ones:

  1. Minimize the number of classes to write and test
  2. “DRY” code up by relying on libraries as much as possible
  3. MVC means my app is made up of models, views, and controllers

It’s not difficult to see how these lead to large, fragmented classes tightly coupled to oodles of dependencies. The resulting code is going to be difficult to test well in any circumstance, and will bear little resemblance to anything that was “test-driven.” I’d like to suggest some replacements:

  1. Minimize the number of dependencies per class.
  2. Minimize the number of classes dependent on an external dependency.
  3. Write the code first, worry what “category” each class falls into later.

The first will result in more classes, but they’ll be more easily tested, refactored, and maintained. The second encourages dependencies to be isolated into bridge, adapter, or facade classes, keeping the dev’s code dependent on interfaces he or she owns. The third breaks the MVC (among others) intuition pump that says every class we write has to fit one of two or three possible roles. A dev utilizing these heuristics will find themselves asking “how do I unit test this?” far less frequently.

Now, “how do I integration test this” is a different question entirely… more on that later.