☆ Yσɠƚԋσʂ ☆

  • 1.36K Posts
  • 2.22K Comments
Joined 5 years ago
Cake day: January 18th, 2020


  • What I’m saying is that most good static typing systems do not practically have such limitations, you’d be very hard pressed to find them and they’d be fairly illogical. Most static typing systems that are used in enterprise do have limitations because they are garbage.

    Of course they do, it’s silly to claim otherwise. Some type systems are certainly more flexible than others, but each one necessarily restricts how you can express yourself. Not to mention that advanced type systems introduce mental overhead of their own: the more flexible the type system, the more complex it is as a result. There was famously even a debugger for the Scala type system, illustrating just how absurd things can get. I’ve used plenty of typed languages, including Haskell, so I understand perfectly well how modern static typing works.

    Meanwhile, I’d argue that TypeScript provides incredibly weak guarantees in practice, and the impact of transpiling on the workflow is not insignificant.

    My experience is that immutability plays a far bigger role than static typing. The best pattern for ensuring correctness and maintainability is to break things up into small components that can be reasoned about independently. Any large project can be broken up into smaller parts, and that’s by far the best approach to ensuring correctness that I’ve seen.
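
    For instance, a minimal Clojure sketch of what immutability buys you (the names here are made up): “updates” return new values, so each small function can be reasoned about on its own.

    ```clojure
    ;; Hypothetical order data: assoc returns a new map, the original
    ;; is never touched, so callers can't be affected at a distance.
    (def order {:items [{:sku "a" :qty 2}] :status :open})

    (defn close-order [o]
      (assoc o :status :closed))

    (close-order order) ;=> {:items [{:sku "a", :qty 2}], :status :closed}
    order               ;=> unchanged, :status is still :open
    ```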

    Again, that’s my experience working with many different languages for over two decades now. I’m not suggesting other people can’t have their own preferences.


  • That’s not what I’m saying. I think static typing introduces a certain set of trade-offs that some people prefer. You restrict the set of statements you can express to ones that can be checked by the type system, and in return you get additional compile-time guarantees. For example, the Lemmy devs prefer this trade-off, and it has nothing to do with enterprise workflows.


  • I agree, the language alone isn’t a silver bullet, and I’m not suggesting that it is. You still have to establish a good workflow, do testing, code reviews, architecture design, and so on. All these things are language agnostic. What the language can do is reduce friction in your workflow and nudge design in the right direction by making it easier to do the right thing. I largely see it as a quality of life improvement.

    Also, I’m not saying that patterns like adapters don’t have their uses or that you might not use a similar approach in a functional language. My point was that these types of patterns tend to be more pervasive in mainstream languages.

    Static typing itself is a trade-off as well. It introduces mental overhead because you are restricted to the set of statements that can be expressed in a particular type system, and this can lead to code that’s written for the benefit of the type checker rather than the human reading it. Everything is a trade-off in practice.

    Finally, the choice of language ultimately depends on the particular team. Different people think in different ways and have different experience. The best language is the one the majority of the team is comfortable using. Hence, I’m speaking here from my personal perspective on the way I enjoy doing development; this will necessarily vary from person to person.


  • This is absolutely true, however I don’t particularly value this feature because most engineers typically already cannot separate concerns very well in industry so IMO if I had this I would not want people to use it. Very much a “it works ship it” trap.

    That’s been the opposite of my experience using Clojure professionally. You’re actually far more likely to refactor and clean things up when you have a fast feedback loop. Once you’ve figured out a solution, it’s very easy to break things up and refactor, then run the code again to make sure it still works. The more barriers there are, the more likely you are to just leave the code as is once you get it working.

    This is where you lose me, you still have wrappers and adapters, they’re just not classes.

    There’s a good explanation of the problem here: https://www.youtube.com/watch?v=aSEQfqNYNAc

    When you’re dealing with types or classes, they exist within the context they’re defined in. Whenever you cross from one context to another, you effectively have to copy the data into a new container to use it. With Clojure, there is a single set of common data structures used throughout the language, so any data you get from a library or another component can be used directly, without additional ceremony.
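
    As a rough sketch of what that looks like in practice (assuming org.clojure/data.json is on the classpath; the data here is made up):

    ```clojure
    ;; Data coming out of a library is just maps and vectors, so it crosses
    ;; component boundaries directly, with no DTOs or adapter classes between.
    (require '[clojure.data.json :as json])

    (def user (json/read-str "{\"name\": \"ada\", \"roles\": [\"admin\"]}"
                             :key-fn keyword))

    (:name user)                         ;=> "ada"
    (update user :roles conj "reviewer") ;=> same shape, one more role
    ```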


  • Yeah you can definitely have this kind of stuff in other languages.

    It’s not even remotely comparable. Outside Lisps, I have not seen any environment where you can start up your app, connect the editor to it, and then develop new code in the context of a running application. I also find that language design very much impacts conventions and architecture. Clojure’s focus on immutability naturally leads to code that’s largely referentially transparent, where you can reason about parts of the application in isolation without having to consider side effects and global state.

    Meanwhile, the focus on plain data avoids a lot of the complexity you see in OOP languages. Each object is basically a state machine with an ad hoc API on top of it, and you end up having to deal with a graph of these opaque stateful entities, which is incredibly difficult to reason about. Data, on the other hand, is inert and transparent. When you pass data around, you can always simply look at the input/output data and know what a function is doing. Transforming data also becomes trivial, since you use the same functions regardless of the data structure you’re operating on; this avoids many of the patterns, like wrappers and adapters, that you see in OO style. My experience with Clojure is that its semantics naturally lead to lean systems expressed in terms of data transformation pipelines.
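
    A tiny illustrative sketch of such a pipeline (made-up data):

    ```clojure
    ;; The same handful of core functions work on any collection, so code
    ;; becomes a pipeline of data transformations rather than a graph of objects.
    (def events [{:type :click :ms 12} {:type :scroll :ms 48} {:type :click :ms 7}])

    (->> events
         (filter #(= :click (:type %)))
         (map :ms)
         (reduce +)) ;=> 19
    ```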

    Again, this is my personal experience. Obviously, plenty of people are working with mainstream languages and they’re fine with that. Personally, I just couldn’t go back to that now.


  • I found Clojure jobs were generally pretty interesting. One of my jobs was at a hospital, where we were building software for patient care. We got to go to the clinics within the hospital, observe the workflow, build tools for the users, and then see how they improved patient care day to day. It was an incredibly rewarding experience.

    For me, the language matters a lot, and Clojure is the only language I’ve used for many years that I’m still excited to write code in. Once you’ve worked with a workflow that’s fully interactive, it’s really hard to go back. I really enjoy having instant feedback on what the code is doing and being able to interrogate the app any time I’m not sure what’s happening. This leads to an iterative development process where you always have confidence that the code is doing exactly what you expect, because you’re always exercising it, and experimentation becomes much easier. You can just try something, see the result, and then adjust as you go.
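
    To make that concrete, a small hypothetical sketch of the loop (the domain and names are made up): a “rich comment” block evaluated form-by-form from the editor against the running application.

    ```clojure
    ;; A function under development in the live process.
    (defn risk-score [patient]
      (+ (if (> (:age patient 0) 65) 2 0)
         (count (:conditions patient))))

    (comment
      ;; evaluated in the running app, never shipped:
      (risk-score {:age 70 :conditions [:diabetes]}) ;=> 3
      ;; tweak risk-score, re-evaluate the defn, run the call again
      )
    ```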


  • Clojure jobs are definitely around. I got involved in the community early and wrote a few libraries that ended up getting some use. I also joined a local Clojure meetup and ended up making connections with companies using it. I’ve also worked in team lead positions in a few places where I got to pick the tech stack, and I introduced Clojure. I didn’t find it hard to hire for at all. While most people didn’t know Clojure up front, most of those who applied were curious about programming in general and wanted to try new things.


  • It’s really impressive to think what was achieved with hardware so limited by today’s standards. While languages like Clojure are rediscovering these concepts, it feels like we took a significant detour along the way.

    I suspect this has historical roots. In the 1980s, Lisp was primarily used in universities and a small number of companies due to the then-high hardware demands of features like garbage collection, which we now consider commonplace. Meanwhile, people who could afford personal computers were constrained by very basic hardware, making languages such as C or Fortran the practical choice. Consequently, the vast majority of developers lacked exposure to alternative paradigms. As these devs entered industry and academia, they naturally taught programming based on their own experiences, which is why the syntax and semantics of most mainstream languages can be traced back to C.


  • Common Lisp and Smalltalk provided live development environments where you could run code as you wrote it, in the context of your application. Even the whole Lisp OS was modifiable at runtime: you could open the code for any running application, or even the OS itself, make changes on the fly, and see them reflected immediately. There’s a fun run through the Symbolics Lisp Machine here: https://www.youtube.com/watch?v=o4-YnLpLgtk

    Here are some highlights.

    The system was fully introspective and self-documenting. The entire OS and development environment were written in Lisp, allowing deep runtime inspection and modification. Every function, variable, or object could be inspected, traced, or redefined at runtime without restarting. Modern IDEs provide some introspection (e.g., via debuggers or REPLs), but not at the same pervasive level.

    You had dynamic code editing & debugging. Functions could be redefined while running, even in the middle of execution (e.g., fixing a bug in a running server). You had the ability to attach “before,” “after,” or “around” hooks to any function dynamically.
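
    Clojure retains a slice of this. As a rough sketch of the same idea in a running Clojure process (handle-request is a hypothetical function):

    ```clojure
    ;; Wrap a live function with "before"/"after" advice, without
    ;; restarting anything; the var is swapped in the running process.
    (defn handle-request [req]
      {:status 200 :body "ok"})

    (alter-var-root #'handle-request
      (fn [original]
        (fn [req]
          (println "before:" (:uri req))    ; "before" hook
          (let [resp (original req)]
            (println "after:" (:status resp)) ; "after" hook
            resp))))
    ```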

    The condition system in CL provided advanced error handling where restarts allowed interactive recovery from errors, far beyond modern exception handling.
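
    Clojure has no native condition system, but the shape of a restart can be approximated with a dynamic var; a rough sketch, nowhere near CL’s full machinery:

    ```clojure
    ;; The caller chooses the recovery policy, and recovery runs at the
    ;; point of the error rather than after unwinding to a distant catch.
    (def ^:dynamic *on-bad-int*
      (fn [e] (throw e))) ; default "restart": rethrow

    (defn parse-int [s]
      (try (Long/parseLong s)
           (catch NumberFormatException e
             (*on-bad-int* e))))

    (binding [*on-bad-int* (fn [_] 0)]
      (parse-int "oops")) ;=> 0, recovered in place
    ```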

    The Dynamic Window System’s UI elements were live Lisp objects that could be inspected and modified interactively, and objects could be edited in structured ways (e.g., modifying a list or hash table directly in the inspector). Modern IDEs lack this level of direct interactivity with live objects.

    You had persistent image-based development, where the entire system state (including running programs, open files, and debug sessions) could be saved to an image and resumed later. This is similar to Smalltalk images, unlike modern IDEs where state is usually lost on restart.

    You had knowledge-level documentation with the Document Examiner (DOCX), a hypertext-like documentation system where every function, variable, or concept was richly cross-linked. The system could also generate documentation from source code and comments dynamically. Modern tools such as Doxygen are less integrated and interactive.

    CL had an ephemeral GC that provided real-time garbage collection with minimal pauses, and its weak references and finalizers were more sophisticated than those in most modern GC implementations. Modern languages (e.g., Java, Go, C#) have good GC, but lack the fine-grained control of the Lisp Machines.

    Transparent remote procedure calls (RPC) allowed objects to interact seamlessly across machines as if they were local. Meanwhile, an NFS-like but Lisp-native file system allowed files to be accessed and edited remotely, with versioning.

    Finally, compilers like Zeta-C could compile Lisp to efficient machine code with deep optimizations.



  • For sure, it’s a lot easier to do many things today than it used to be, but the way we build software has become incredibly wasteful as well. It’s also worth noting that some of the workflows available in languages like CL or Smalltalk back in the 80s are superior to what most languages offer today. It hasn’t been strictly progress in every regard.

    I’d say the issue isn’t that programmers are worse today, but that the trends in the industry select for things that work just well enough, and that’s how we end up with stuff like Electron.