F# Pain Points
Throughout this long learning process of building and rebuilding a program, there have been a few consistent points that I’d trip over when using F#. Some were more noticeable at the beginning and others only became irritating after repeated use. Coming from C++, I’m used to avoiding language features that are more trouble than they’re worth, so I was hoping F# didn’t have as many or as nasty ‘gotcha’ moments. I don’t think any of these are showstoppers, but they are worth mentioning as drawbacks of the language. These aren’t well-rounded critiques, and I’m still learning that some of them have since been addressed in some way, but they were notable speed bumps in my already rocky adventure in learning F#.
Array mutability
I mentioned before that I relied on F#’s default immutability to ensure that passing records around didn’t introduce a strange chain of side-effect dependencies. Array mutability breaks this for performance and interoperability reasons, which can be great until you get bitten by not giving arrays extra care when working with multiple threads. Arrays aren’t all that special otherwise: they share function names with the other immutable collections and can be used without mutating their members. The assignment itself is still pretty explicit and has special syntax, but that’s all hidden when it’s happening in a deeply nested record.
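A minimal sketch of the trap (the types and values here are mine, not from the program): the record itself is immutable, but the array it holds is not.

```fsharp
type Reading = { Name: string; Samples: float[] }
type Snapshot = { Id: int; Reading: Reading }

let snap = { Id = 1; Reading = { Name = "ch0"; Samples = [| 1.0; 2.0; 3.0 |] } }

// No `mutable` keyword anywhere, yet this compiles and mutates in place,
// visible to anything else holding a reference to the same array:
snap.Reading.Samples.[0] <- 99.0
```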
I mentioned before that I was using actors (F# mailbox processors) as a core component of the program dispatch. They are split up to handle the transformation/validation of serial inputs, the sequential processing loop, and the output conversions with fanout. What happens when you send a message from the processing actor to the output actor that contains some arrays within a deeply nested record? Sadness.
Since working on the most recent version of the program with a deeply nested record, this was the only technical bug that came from just not knowing the language well enough. I wasn’t thinking about threading/sharing problems because I thought that was exactly what functional languages were great at managing with immutability. It took a few hours to realize my misconception and implement a workaround. Just cloning the whole thing would have been painfully slow, since the array could grow into the tens of thousands of elements, and I didn’t know how to easily wrap it in a copy-on-write container to avoid that overhead. I ended up pushing some of my output logic into the processing actor so the message contents were pre-computed without additional overhead.
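The workaround looks roughly like this (the message shape and names are hypothetical): reduce the array inside the processing actor and post only small immutable results, so the output actor never touches the shared mutable array.

```fsharp
// Post only precomputed, immutable data instead of the record holding the array.
type OutputMsg = { Mean: float; Count: int }

let outputActor =
    MailboxProcessor<OutputMsg>.Start(fun inbox ->
        let rec loop () = async {
            let! msg = inbox.Receive()
            printfn "mean=%f over %d samples" msg.Mean msg.Count
            return! loop () }
        loop ())

// In the processing actor: summarize before sending, instead of sharing.
let samples = [| 1.0; 2.0; 3.0 |]
outputActor.Post { Mean = Array.average samples; Count = samples.Length }
```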
For all the F# rhetoric about easily supporting parallelism, this was a minor, but annoying, setback. I wasn’t even using async blocks anywhere else in my code, not to specifically avoid them, but because they didn’t solve a problem I was having. The thought of immutability informed my design decisions, but my implementation wasn’t able to take full advantage of how I thought it might work. I felt I was missing out on one of the strengths of the language if I kept using arrays, but I often needed them for numeric library calls. After dealing with this, I know I’ll still have to treat array usage with extra caution when I do need them because of this default mutability.
Completeness
This comes mostly from the ease of building quick abstractions and refactoring patterns for the business logic. For how large the rest of .NET and the F# core library is, not having the function (T -> Option -> T) was distressing (defaultArg with the argument order switched for easier chained application). The argument order works out naturally 90% of the time, which signals good design, but there are plenty of times where I need to swap the order of arguments to make pieces compose easily, which requires a pretty verbose lambda in comparison to the rest of a very terse language. The best rule of thumb I’ve found so far is that the arguments that change most often go last, since that makes a function easier to partially apply and pass around as a closure. I often asked myself whether I should have restructured my problem so that the argument order of the built-in functions was more natural, or whether I should make more wrappers on the built-in functions for different argument orders. Most of this knowledge of available functions and patterns was magic to me, and I didn’t like having to second-guess consistency or availability when I was trying to focus on other problems. In more rudimentary languages like Go, I’d know that I’d have to pull in an external library or write most non-trivial functions myself, but with F# providing a much wider breadth of tools, I sometimes struggled to decide if I should keep looking for a component to do non-trivial work or just knock it out myself.
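The helper in question is tiny; later F# core versions added `Option.defaultValue` with exactly this shape, but at the time I wrote my own (the name here is mine):

```fsharp
// defaultArg with the arguments flipped, so the option can be piped in last.
let withDefault (def: 'T) (opt: 'T option) : 'T =
    defaultArg opt def

// Partial application now composes cleanly in a pipeline:
Some 5 |> withDefault 0   // 5
None   |> withDefault 0   // 0
```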
There are probably half a dozen common patterns for which I’ve needed to build my own variants of core library functions. I don’t know if I’m just doing things wrong to have such common needs for slightly different non-standard functions. I’d assume the functions I’ve created exist under names where I can’t find them. For example, I realized I had re-created lenses to ease some of my unwrapping problems, but if I hadn’t just been browsing around for tools I never would have known such a pattern existed! Sometimes the patterns are esoteric enough that the language maintainers probably expect most users to implement them, because they aren’t used often enough in every application domain. There might be more general forms of the patterns that could help, but keeping everything single-purpose and simple had other major benefits. I had a few experiences where I wanted to do something that I thought would be obvious and found that while it’s totally possible, it’s non-trivial to understand and use (computation expressions). Other times I’d realize that building a reusable pattern was more work than it’s worth if the snippet is small and clear enough to see in a few unrelated places. To keep my code simple I probably relied more on copying code than I would have if the components I was copying were bigger. Finding a good tradeoff was just super tough since I wasn’t yet acquainted with the level of abstraction that F# provides.
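The lens pattern I stumbled into looks roughly like this: a hand-rolled sketch, not a library, with the record types invented for illustration.

```fsharp
// A get/set pair for one field, composable into paths through nested records.
type Lens<'a, 'b> = { Get: 'a -> 'b; Set: 'b -> 'a -> 'a }

let compose (outer: Lens<'a, 'b>) (inner: Lens<'b, 'c>) : Lens<'a, 'c> =
    { Get = outer.Get >> inner.Get
      Set = fun c a -> outer.Set (inner.Set c (outer.Get a)) a }

type Inner = { Value: int }
type Outer = { Inner: Inner }

let innerLens = { Get = (fun (o: Outer) -> o.Inner); Set = (fun i o -> { o with Inner = i }) }
let valueLens = { Get = (fun (i: Inner) -> i.Value); Set = (fun v i -> { i with Value = v }) }

let nestedValue = compose innerLens valueLens
let updated = nestedValue.Set 42 { Inner = { Value = 1 } }   // updated.Inner.Value = 42
```

This avoids the long `{ x with Inner = { x.Inner with Value = ... } }` chains that pile up once records nest a few levels deep.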
For data exploration and graphing, some noise has been made about F#’s deficiencies, and it doesn’t take long to find that Deedle and FSharp.Charting are hardly a replacement for matplotlib and Python’s pandas. Because I’m so much more familiar with pandas, I’d often get stuck with strange workarounds for the tasks that weren’t built into Deedle. It’s very clearly not as heavily adopted as I expected (more on that later), so anyone who is seriously using it has probably written hundreds of functions to make it comparable with R. The advantages of F# were not clear here; all of that type goodness went away when every key is a string. Working with data frames rarely produced fewer run-time errors and didn’t run noticeably faster than Python for the scale of data I was using. The biggest gaps I noticed were the lack of easy joins, alignment for data on different sample intervals, and a preference for C# compatibility over a first-class F# experience. I wasn’t too surprised by any of this, but it was a disappointment nonetheless. For most tasks I just used the CSV type provider, since interfacing with Deedle was only worth it if I was leveraging something really powerful. I have the opposite problem with pandas, where it tends to sneak in even where it’s not quite needed because of how powerful and convenient it can be.
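For comparison, the CSV type provider path is short enough that Deedle rarely earned its keep. A sketch using FSharp.Data, where the file name and column names are assumptions about the data's schema:

```fsharp
open FSharp.Data

// The provider inspects the sample file at compile time and generates a
// typed row for each column in the header.
type Samples = CsvProvider<"samples.csv">

let data = Samples.Load("samples.csv")
for row in data.Rows do
    // Accessors like row.Timestamp or row.Value are generated from the
    // header; the names depend entirely on the file.
    printfn "%A" row
```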
C# Interop
Most of the ink in the F# docs is spilled over the OO features and C# interop. I’ve never had a painless experience using C# code from F#, and I never even tried going the other way or calling out to another language from F#. In terms of project management and tooling, the story is much better than most other hosted languages, but in terms of language feature compatibility, F# isn’t like Kotlin. Support for units of measure is my biggest pain, since it requires lots of manual conversion/checking on any boundary, which compromises the integrity of the feature. The next is exceptions and nulls. Most things in F# don’t throw or return them, so working with them can be rare. C# code has no qualms about throwing exceptions for errors or returning null values, which makes working with it in F# immediately painful. F# code that does throw generally has a try version that returns an option instead. Avoiding these F# exception throwers is, I think, a big first step toward focusing on the less happy code paths that exceptions often gloss over. But with this additional handling code, I’m finding that I’m often catching broader situations than I meant to handle. Putting all the assumptions into the type system is great, but eventually you have to do the work to unwind it properly as well, which can be just as tedious as encoding it in the first place. There are also places where there just isn’t yet an F#-style interface to some of these, the most obvious being the TryParse methods and the collection aggregation functions throwing on empty collections. Both of these just made more work for me, where I’d rather have the language be 100% consistent, because the 1% of inconsistencies hurt more when I wasn’t anticipating them.
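The TryParse case is typical: the usual fix is a small shim from .NET's bool-plus-out-parameter convention to an option, which ends up getting written once per numeric type.

```fsharp
// Wrap the TryParse out-parameter pattern in an option-returning function.
// F# exposes the out parameter as a tuple, which pattern matches cleanly.
let tryParseInt (s: string) : int option =
    match System.Int32.TryParse s with
    | true, value -> Some value
    | false, _ -> None

tryParseInt "42"    // Some 42
tryParseInt "oops"  // None
```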
In terms of working with the rest of the C# and .NET ecosystem, it seems like F# is always in catch-up or fix-up mode. Either some VS or .NET system change rolls out for C# and VB.NET, and F# doesn’t get support until later, or F# leads the way on a feature and then falls behind when the .NET runtime is modified to specifically support the feature for C#. Because C# is just so much bigger than F#, it feels like it just rolls over any F# concerns. It’s never C# making compromises in large libraries when working with F#; it’s always F# that needs the crazy workarounds for good C# interop. This underdog mentality is pervasive in the number of unmaintained projects whose primary goal seems to be just providing a suitable F# experience for the dominant C# tools. I don’t know how the Scala, Clojure, and Groovy communities deal with this stepchild problem, but F#’s close relationship with Microsoft doesn’t put it in any better position than if it were maintained by a third party.
Readability
My F# code doesn’t just need to interop with C# code, but with C# and Python devs. Some of the logic comes through quite nicely, but when I’m doing some relatively clean composition I’ve seen readers get lost in binds, maps, >>, |>, and Some _ when pattern matches. Their attention drops off as I try to explain what it all means and doesn’t come back for the real logic at hand. It was so easy to mix this functional composition style with the rest of the business code when first putting the system together, since the ease of composition often told me how closely the logic was related. I find the terse version with the routing and logic mixed more conceptually clear, but only if you grasp the supporting syntax; if you don’t, the code comes across as the worst kind of magic noise that non-programmers see when they look at code. It’s a tough trade-off when abstracting, because adding more functions and names can often make the whole less clear: you end up creating nonsensical specific names for the general transformation and composition steps just to separate out what’s considered the base case of the computation. The final logic is often trivial math, but the routing and data management (as expressed via function composition) to get there can be the tricky part.
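A toy version of the style that trips readers up (the example is mine; the logic is trivial, and the routing is what needs explaining):

```fsharp
// A pipeline mixing routing (choose/map/sum) with the actual logic (* 2).
let tryParse (s: string) =
    match System.Int32.TryParse s with
    | true, v -> Some v
    | _ -> None

let total =
    [ "1"; "two"; "3" ]
    |> List.choose tryParse   // drop anything that doesn't parse
    |> List.map ((*) 2)       // the actual 'business logic'
    |> List.sum               // total = 8
```

To me every step is visible; to a reader who doesn't know `choose` or `|>`, the whole thing is noise.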
Even attempting to review the code outside of an IDE can be a pain, since I rely so heavily on IntelliSense for types and compilation. On one hand, it’s great that the types don’t make an appearance for the non-programmers looking at the code; if it’s abstracted correctly it can read better than the best-written C++. The problem is when more technical questions arise around a type’s usage and you can’t answer them without having the compiler check if that’s valid everywhere. The ability to look at some code and have the context to mentally compile it within that scope does speak for having a simpler language with standard OO encapsulation. For a language that so heavily trumpets the importance of readability and understandability, I find F# code can lose out to reading verbose Python code that has fewer special characters and less hidden parameter passing. Some of this is a result of my abuse/love of pattern matching and my standard of using only try functions, but I’d like to have both power and simplicity without compromising either. I think I better understand the appeal of gradual static type systems like Dart’s and TypeScript’s that can provide some of the benefits with only some of the downsides. Full type inference is a huge boon that provides many benefits and few downsides, but it can be a detriment without the right tools.
Type flexibility
When first getting into the ML functional programming literature I heard many statements to the effect of “If you can get it to compile, it’s probably right”. In my experience with this and previous programs in other non-ML languages, nothing could be further from the truth. It has been a very long time since I’ve had to fight a C++ compiler to get an implementation to compile, so most of my C++ code was just as ‘right’ as my first pass of F# code. I grew up in statically typed languages, so I’ve always thought about my solutions in terms of types. I never fought the F# compiler to prove correctness; most of the time it would just catch sloppy code as I finished typing it. It was very rare that it would catch something useful other than incomplete pattern matches. The type system is easy enough to use that it could catch bugs like control-flow dead ends or misplaced usages that would require an exhaustive test suite to catch in a dynamic language. The quick turnaround to find type errors without tests is a big productivity boost, but I’ve had just as many or more logic bugs as I do in Python, bugs that I wasn’t able to encode into the type system. These bugs are often more sinister since they can appear ‘to do the right thing’ because they meet all the type preconditions, but will hit the error paths in other areas later, just like in dynamic languages. With F# the bugs manifest as strange program behavior around collections instead of a type or attribute error. Because I wasn’t always tracking down simple errors, most of the time I spent debugging was in these kinds of situations. With type errors in a dynamic language, the failure is more localized (or at least directly traceable on the call stack), instead of hunting down and watching the function that manipulates a collection. Pushing everything inside values didn’t make the program more robust, but it did provide more opportunities to explicitly handle errors.
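Incomplete pattern matches were the one check that consistently earned its keep. A minimal example of what the compiler flags (the union is invented for illustration):

```fsharp
type State = Idle | Running | Finished

// The compiler emits warning FS0025 here: incomplete pattern matches,
// since 'Finished' is not covered. Calling `describe Finished` would
// throw MatchFailureException at runtime.
let describe state =
    match state with
    | Idle -> "idle"
    | Running -> "running"
```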
I generally underestimated how tough writing good error handling code was in comparison to less strict environments, even with a good type system.
It’s great at the high level to sketch pieces out and provide enforceable interfaces, but at the lowest level, when dealing with the primitives, all of that is greatly limited. As much as I like the safety net, the lack of higher kinds or structural polymorphism felt like a hindrance when trying to quickly and concisely string together pieces of the system. I fell into a pattern that was biased against generic usage and instead always focused on the specific case at hand. I’m well versed enough in the raw power of C++ templates that I was often jarred by how limited the type system could be when most of the code would qualify as very meta-programming friendly. I haven’t tried using code quotations or type providers yet, but I wasn’t expecting to write as much single-purpose code because of type constraints as I did. The reason the data collections are the lingua franca of F# is that it’s far more work to make good generic types that play well with all of the existing functions. This feels like a commentary on OO, where the problem is reversed: it’s hard to make good data collections because everything can be made so abstract.
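What the lack of higher kinds means concretely: you can't abstract over the container itself, so each shape gets its own function. The concrete lines below compile; the commented signature is the thing F# can't express.

```fsharp
// One map per concrete container; fine, but not abstractable:
let doubleList  = List.map   ((*) 2)   // int list   -> int list
let doubleArray = Array.map  ((*) 2)   // int[]      -> int[]
let doubleOpt   = Option.map ((*) 2)   // int option -> int option

// The generic version you'd write with higher-kinded types,
//   val map : ('a -> 'b) -> 'M<'a> -> 'M<'b>
// where 'M ranges over containers, is not expressible in F#.
```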
Don’t get me wrong, I love the strong inferred static type system for everything else it brings to the table for tooling and performance, but it’s no silver bullet, and the effort of learning it didn’t pay off as I expected. With the exception of units of measure, I was never really able to leverage the type system to prevent the problems I normally encounter in other languages.
Support & Adoption
For the scientific domain, F# is small potatoes compared to Python and C++. Every analytical process I needed was readily available in Python and had some C/C++ equivalent. The .NET ecosystem just hasn’t developed the way other open-source ecosystems have for numerical methods and data science. It’s a sad day when I find multiple Julia implementations of a filtering algorithm I need, but no freely available .NET code (even commercial tools aren’t much better). I’ve even bumped into quite a bit of scientific .NET code that admitted to being a poor FORTRAN port (and it was), so it’s not even an argument of quality vs. quantity. There just isn’t the same mass of people producing reusable components for F#/.NET in my domain as there is for other languages. There is some ongoing work on this front, and I think F# is far better suited to some of it than C# would have been, but there’s also a long way to go, and it’s more than I could do on my own since I have this project to finish.
In terms of general language trajectory, F# appears to be going nowhere in particular. It doesn’t have the best tools of any particular niche, and it’s trying to tackle general-purpose programming problems for which there are far better-entrenched alternatives. The ‘killer app’ for a language is a library, framework, or company that pushes adoption of the language to prominence; F# doesn’t have one yet. There are plenty of F# projects that are good by themselves or as ports of other languages’ good frameworks, but nothing that would be the deciding factor among the many up-and-coming functional programming languages with more momentum. F# makes so many compromises in so many different areas that I’m afraid it will always be treated as the second-best tool for any job. The only clear advantage it has here is that “it’s not C#”. The community support is fragmented and small enough to be spoiled by a few bad apples. Much smaller languages in terms of adoption have much more vibrant and passionate communities. Since F# inherits some of the C# community, the attitudes are very conservative, and it’s clear most people are investigating it as a toy or to learn, not as the solution to their problems. The language itself might internally move pretty fast while being very stable, but the community is kinda sluggish just based on culture. I’m still optimistic about the language because it has Microsoft behind it for now, and it might have some great applicability in the future for cloud-scale in-process parallelism.
Diagnosis
The problems I’ve had with F# the language are mostly symptoms of it being a language with a very small mindshare and my own misconceptions of what a functional language should be. I’m still learning how best to use the type system and set my other expectations about the .NET ecosystem. I don’t have any specific syntax or design quibbles on the language, but I try to keep abreast of the new features and suggestions. There aren’t any specific upcoming language features I’m excited about, but I am eagerly anticipating the rapid infrastructure changes of the .NET ecosystem which might address the problems of support and adoption.
I’m still willing to move forward with my project after all I have learned. Some of that is sunk cost, but I’ve also not found a better language for the way I think or the things I want to do. When I’m working in other languages I’m now constantly thinking about how much easier or cleaner something would be if I could do it in F# rather than with the current language’s idioms. The experience so far has taught me to be more introspective and honest about capabilities and expectations. Learning F# has been the impetus for changing the way I think about programming as a business and a profession. For as crazy as I needed to be to get this far, I think the next step is going to be passing some of that on.