Go is not an easy language (arp242.net)
487 points by jen20 on Feb 22, 2021 | 431 comments



Barriers to entry are invisible. They are invisible to people on the inside and most frequently invisible to people who have a hand in creating those barriers.

I once had a conversation with a founder about their signup flow. This founder had aspirations to be a global player. They said that step 2 of their flow included entering a credit card number. I had to stop them. "What about people who don't use credit cards?" They were nonplussed. "You know, China, much of Africa ..." they had never actually thought about whether people had credit cards because every single person they interacted with had one.

Back to languages. If you've never taught an introduction to computer programming for a general audience you are blind to what does and does not matter about a language. You look at things like `foo.bar()` and think "Yeah that's a simple method invocation" and have no idea how many people you just lost.

Never underestimate how much ease of use matters. We, as a community, have selected for decades against people who care about language ergonomics because they get hit in the face with a wall of punctuation filled text and turn away. We fight over where braces belong while not realizing how many hundreds of thousands of people we've excluded.

Ease of use is the most important part of a language. It's just that the barriers to entry are invisible and so the people who happen to be well suited to vim vs emacs debates get to talk while the vast majority of potential programmers are left on the outside.

We create barriers to entry accidentally because we design for people like us. To us the barriers are invisible.


Programming has an almost infinite depth to it, depending on the complexity you are tackling/modeling. In a way, it is like mathematics in the way you have layers acting as foundations/abstractions for more complex representation. Introduction to mathematics is counting discrete things, then the number line, then addition, which is a foundation for/abstracted by multiplication, which is in turn a foundation for/abstracted by exponents, etc., and in no time you're at the level of multivariate calculus and n-dimensional hypercubes.

Each level requires notation that is concise (so it can fit in short-term memory to be actually useful) - one cannot expect graduate-level mathematics to share the same notation/symbols as beginner (elementary) maths.

Programming languages have to trade off which level they are biased towards; I do not believe in a universal language that is both easy for beginners while being concise for advanced users.

Back on topic - Go is not an easy language, it is a simple language, those 2 things are not the same.


Go isn't even simple. They tried to be simple, but mostly managed only to move the complexity around, or foist it onto the user's shoulders, by not providing a layered, composable API in the standard library.

There are some kinds of complexity you can't eliminate, only contain through careful, thoughtful architecture. Unfortunately, people often confuse this kind with the kind that can be eliminated, and end up making a bigger and bigger mess as they push things around.


What do you mean by "layered, composable API"? Go does provide abstractions for common use cases in its standard library (`io.ReadAll`, for example) and there are interfaces to connect different parts of the standard library (`io.Reader`/`Writer`, `fs.FS`, etc.).
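
For example, a gzip decompressor can be layered over any byte source and then drained with `io.ReadAll`, because each layer depends only on `io.Reader`. A minimal sketch of that composition, using an in-memory buffer as the source:

  package main

  import (
    "bytes"
    "compress/gzip"
    "fmt"
    "io"
    "log"
  )

  func main() {
    // Compress some bytes into an in-memory buffer.
    var buf bytes.Buffer
    zw := gzip.NewWriter(&buf)
    zw.Write([]byte("hello, layered readers"))
    zw.Close()

    // Layer a gzip reader over the buffer; both sides are io.Readers,
    // so io.ReadAll neither knows nor cares where the bytes come from.
    zr, err := gzip.NewReader(&buf)
    if err != nil {
      log.Fatal(err)
    }
    data, err := io.ReadAll(zr)
    if err != nil {
      log.Fatal(err)
    }
    fmt.Println(string(data))
  }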


> Go is not an easy language, it is a simple language, those 2 things are not the same.

This distinction is fundamental to any sort of design, but is unfortunately lost on a lot of people (especially developers, in my experience). Easy and simple are near-universally conflated.

> I do not believe in a universal language that is both easy for beginners while being concise for advanced users.

Exactly. I really want some languages to be more intuitive and easy to pick up for non-programmers. The average person (whatever that means) does not have the first clue about the machines that run so many parts of their lives.

At the same time, such a language would likely be a bad fit for most professional software development. That hardly means it's without value.

I know languages like that exist, but they're often aimed at absolute beginners and are treated like toys. There doesn't seem to be much middle ground or "transition languages".

Insert joke about how $LANGUAGE_YOU_DISLIKE is a toy.


> Easy and simple are near-universally conflated.

That’s an interesting statement as it doesn’t always work in other languages. In German (my native language) my first instinct was to translate both words as “einfach” which contains both concepts. In fact, in my online dictionary of choice the word “einfach” is the first entry for both “easy” and “simple”. So if Germans conflate these two it might be because of the language they speak :) But more to the point, I’m wondering how universal the distinction between easy and simple is when other languages cannot express that distinction as easily as in English.


Interesting. My native language, Afrikaans, has a lot of influence from both Dutch and German (as well as a bit from English and quite a bit from Bantu African languages). We say "eenvoudig" for "simple" and "maklik" for "easy". I recognize both the (different) Dutch words Google Translate provides me when I translate to Dutch as similar to the Afrikaans words, but Google Translate translates both to "einfach" when I translate to German. Maybe German for some reason conflates the meanings. That said, homonyms and homophones that conflate meanings are found in most human languages, and often for meanings that are far easier (or is that simpler ;-)) to distinguish than "simple" and "easy".


If we were to agree that a simple task has low complexity to accomplish, and an easy task requires little energy to accomplish, then conflating them is straightforward, particularly when weighing the mental effort to accomplish the task. (Tying a shoelace is simple: four steps. Tying a shoelace with 5 pound weights on my wrists is still simple, but not easy.) Of course, if you don't agree to these definitions, then the intersection of them is thinner.


I mean, Python is the de facto beginner language and also used for professional software development (and of course intermediate data science work). Are you suggesting this is an unwise or unstable equilibrium?


I'm glad you bring that up. I think Python is the closest we have to a "universal language" (even so, it still has some limitations).

I think it works well for beginners because the language itself is so consistent and they have put a lot of effort into avoiding corner cases and "gotchas". And I think it works for professional uses because of third party library support.

To answer your question: I'm not suggesting that at all. I'm honestly not entirely sure how Python balances it seemingly so well. Given the lack of focus in the industry towards "intermediate" programmers and use cases, my slight fear is that Python will be shoehorned into one direction or the other.

Even if the language itself isn't, it does feel like the use-case-complexity gap is growing exponentially, at times.

And not just with Python. Seemingly, you're either a complete beginner learning conditionals and for-loops or you're scaling out a multi-region platform serving millions of users with many 9's of uptime.


Python does this so well because of the extremely full featured and fairly easy to use C API. Advanced programmers can write extension modules for the interpreter and provide APIs to their C libraries via Python, give their types and functions a basically identical syntax to MATLAB and R, and bang, statisticians, engineers, and scientists can easily migrate from what they already know how to use, pay no performance penalty, but do it in a language that also has web frameworks and ORMs. You can do machine learning research and give your resulting predictive models a web API in the same language.

This gets badly underappreciated. I've been working in Python for a while and honestly, I hate it. I wish I could use Rust for everything I'm doing. I can't stand finding so many errors at runtime that should be caught at build time in a language with static type checking.

But I also recognize the tremendous utility in having a language that can be used for application development but also for numerical computing where static typing isn't really needed because everything is some variant of a floating-point array with n dimensions. Mathematically, your functions should be able to accept and return those no matter what they're doing. All of linear algebra is just tensor transformations and you can model virtually anything that way if you come from a hard engineering background. Want to multiply two vectors? Forget about looping. Just v1 * v2. It will even automatically use SSE instructions. Why is that possible? The language developers themselves didn't provide this functionality. But they provided the building blocks, in the form of a C API and operator overloading, that allowed others to add features for them.

So the complaints you typically see about dynamic languages simply don't matter. No static typing? Who cares? Everything is a ndarray. Syntax is unreadable? Not if you're coming from R or MATLAB because the syntax is identical to what you're already used to using. Interpreted languages are slow? Not when you have an interface directly to highly optimized BLAS and ATLAS implementations that have been getting optimized since the 50s and your code is auto-vectorized without you needing to do anything. GIL? It doesn't matter if you can drop directly into C and easily get around it.

Meanwhile, it's also still beginner friendly!

EDIT: I should add, editable installs. That's the one feature I really love as a developer. You can just straight up test your application in a production-like environment, as you're writing it line by line. No need to build or deploy or anything. Technically, you can do this with any interpreted language, but Python builds this feature directly into its bundled package manager.


Great rundown! It's love/hate for me too. Python is the worst language for scientific computing, except for all the others. I think Julia's going to take the crown in a few years though, once the libraries broaden out and they figure out how to cache precompiled binaries to get the boot time down. With Python, it's not so much that you get to write C; you have to write C to get performance. I'll be interested to see whether Julia takes off for applications besides heavy numerical stuff. That seems to be the Achilles' heel of languages designed for math/science applications -- it's easier to write scientific packages inside a general-purpose language than vice versa.


This is hands down the best description I've seen of why so many of us persist in using Python despite the language or runtime. I do hope that more alternative language ecosystems will begin to thrive in the numerical space and that we'll see more ergonomic facilities for generating high performance code from within Python itself.


TL;DR: Right now Python is almost always easier for numeric beginners than Rust is, and more productive too. I just don't see Python's ease and productivity advantages remaining if Rust can catch up with Python's ecosystem and toolchain. But we'll have to wait and see if that will happen. And if and when Rust is actually (slightly) more friendly to the numeric-computing beginner and much more productive in some numeric/scientific contexts than Python, Python loses its current intermediate-language position. Especially if similar improvements happen in other domains.

> Python does this so well because of the extremely full featured and fairly easy to use C API. Advanced programmers can write extension modules for the interpreter and provide APIs to their C libraries via Python, give their types and functions a basically identical syntax to MATLAB and R, and bang, statisticians, engineers, and scientists can easily migrate from what they already know how to use, pay no performance penalty, but do it in a language that also has web frameworks and ORMs. You can do machine learning research and give your resulting predictive models a web API in the same language.

You know what's better than "the extremely full featured and fairly easy to use C API": a language that can itself compete with C/C++ for writing the libraries you need. The only advantages Python has over Rust regarding library ecosystem are the first-mover advantage, and that Rust makes obvious how terrible the C API is, which means people often invent new Rust libraries rather than reuse the old C libraries. The only advantages Python has over Julia are first-mover, and that I doubt Julia-native libraries can truly match highly optimized C/C++/Rust libraries performance-wise in most situations where performance actually matters.

> But I also recognize the tremendous utility in having a language that can be used for application development but also for numerical computing where static typing isn't really needed because everything is some variant of a floating-point array with n dimensions.

* Some numeric use cases need to work with more than floating points: maybe 2D (complex)/4D/8D numbers, maybe dollars, maybe durations. You lose all units of measurement, and they are often valuable. In Python you cannot indicate "this cannot contain NaN/None".

* In an N-D array, N is a (dependent) type; so is the size, so is the shape. Julia got this right, but last time I checked it had a nasty tendency to cast the dependent types to `Any` when they got too complex. Imagine if you could replace most `resize` calls with `into` calls and have the compiler verify the few cases where you still need `resize`. In Rust, several libraries already use dependent types for these sorts of uses, but the lack of important features that are only now starting to approach stable (const generics, GATs) makes them very unergonomic to work with.

* I see a lot of complex types that should have been a single column in a dataframe get represented with the full complexity of multi-indexes. Yuck! Not only more complex, but far less expressive and more error prone. I haven't yet seen Rust go the extra step and represent a struct as a multi-index to get the best of both worlds, but it's what I would love, and Rust definitely has the potential. It's just not a priority yet, as we are still implementing the basics of dataframes first.

* Things get even more interesting when you throw in machine learning. As a master's student, it took me months (mostly during the summer vacation, so I wasn't going to ask my promoter for help) to figure out that the reason I was getting bogus results was a methodological mistake that should have been caught by the type system in a safe language with a well-designed ML library. Here the issue is "safe" and "well-designed library" more than "statically typed", but a powerful type system is required, and it would catch the error in milliseconds instead of hours if it is static rather than dynamic.

> Forget about looping. Just v1 * v2. It will even automatically use SSE instructions.

Many languages have operator overloading and specialization or polymorphism to enable optimizations. In Rust this is again just a case of libraries providing an optimized implementation with an ergonomic API.

> So the complaints you typically see about dynamic languages simply don't matter. No static typing? Who cares? Everything is a ndarray.

Nope. Everything is not just an ndarray. That often works well enough. But when numeric computing gets more complex, you really want a lot more typing.

> Not when you have an interface directly to highly optimized BLAS and ATLAS implementations that have been getting optimized since the 50s and your code is auto-vectorized without you needing to do anything.

Many of those decades-old optimizations are irrelevant, or even deoptimizations, on modern hardware and with modern workloads. Optimizations need maintenance. In C/C++ optimizations are very expensive to maintain; in Rust we can not only leapfrog outdated optimizations but also maintain optimizations much more cheaply.

Also, as we move into more and more datasets that are medium/large/big (and therefore don't fit into RAM), we're getting more and more optimizations that are very hard to make work over the FFI boundary with Python. The fastest experimental medium-data framework at the moment is implemented in Rust and has an incredibly thick wrapper that includes LLVM as a dependency (of the wrapper), since it's basically hot-reloading Python and compiling it to a brand-new (library-specific) DSL at runtime to get some of the advantages of AOT compilation and to try to recover some of the optimizations that would otherwise be lost across the FFI boundary. Note that this means you now need a very expensive optimized compile every run, not every release compile of the program, though I guess you can do some caching. Note also that it means the maintenance cost of the wrapper quite likely dwarfs the maintenance cost of the library implementation, which is not a good situation to be in.

The fastest currently-in-production Python framework for medium/large data is probably Dask, but to achieve that performance you need to know quite a bit about software engineering and the Dask library implementation, do quite a bit of manual calculation for optimal chunk sizes, optimal use of reindexing, optimal switching back and forth with numpy, etc., and avoid all the many gotchas where something you expect would work crashes instead and needs a very complex workaround. In Rust, you can have a much faster library where the compiler handles all of that complexity for you and where everything you think should work actually does work, and that library is already available (though not yet production ready).

> Meanwhile, it's also still beginner friendly!

* Is it? I admit its code (but definitely not its toolchain) is marginally better for programming novices (and that "marginally" is important). Importantly, remember that novices don't need to learn about borrow checking/pointers/whatever in Rust either, and that by the time they're that advanced, they need to worry about it in Python as well - but Python provides no tools to help them, so instead of learning concepts, they learn debugging. Rust is lagging in teaching novices mostly due to the lack of a REPL and fewer truly novice-friendly docs, IMHO.

* But give Rust a good REPL and a more mature library ecosystem and I cannot imagine Python being any more beginner friendly than Rust for numeric computing. When "everything is just an ndarray of floats" is good enough, the Rust would look identical to the Python (except for the names of the libraries used) but provide better intellisense and package management. When "just an ndarray of floats" isn't good enough, Rust would have the beginner's back and help them avoid stupid mistakes Python can't help with, or express custom domain types that Python cannot express, or at least cannot express without losing the advantages of the library ecosystem.

Don't get me wrong. Right now Python is almost always easier, and more productive too, for numeric computing. I just don't see it remaining that way if Rust can catch up with Python's ecosystem and toolchain. But we'll have to wait and see if that will happen.

I can also think of several other domains where Rust is potentially better suited as an intermediate language than the competitors:

* Rust is arguably already there in embedded if you can get access to and afford a Cortex-M. But I think it might actually be capable of beating MicroPython in ease on an Arduino one day (at least for programmers not already expert in Python). I won't go into my reasoning since this is already getting long. One day embedded Rust might also compete with C in terms of capabilities (the same or better for Rust) and portability (probably not as good as C on legacy hardware, but possibly better on new hardware).

* I think Rust is already a better language than Go for cloud-native infrastructure except for its long compile times, and it seems like an increasing number of cloud-native infrastructure projects feel that way too. In the meantime, new libraries like `lunatic` might be an indication that one day Rust could compete with Go in ease of writing front-ends for beginners.

* Looking at what happens in the Rust game-libraries space, I think Rust can definitely be a great intermediate language there one day. It already has a library that aims to take on some beginner gamedev/art libraries in languages like JS/Processing/Go, and at the same time it has several libraries aiming to be best in class for AAA games.


Python abounds with corner cases and gotchas. It may have fewer than JS/Perl, but that really isn't saying much. It may hide them until a test or real-world use shows you you've stepped on them, but that's not always a good thing.


> The average person (whatever that means) does not have the first clue about the machines that run so many parts of their lives.

Sadly, I would argue that this is also true of many developers.


It seems like BASIC has been the closest we've come to a general-purpose language with "easy to pick up" features (at least in widespread use)?


> Back on topic - Go is not an easy language, it is a simple language, those 2 things are not the same.

I often see this point here, but I always wonder what people mean by it. Could you elaborate on that point?


I think the author's example is pretty good.

"How do you remove an item from an array in Ruby? list.delete_at(i)...Pretty easy, yeah?"

"In Go it’s … less easy; to remove the index i you need to do:"

  list = append(list[:i], list[i+1:]...)
So Go is simple in that it doesn't have shortcut functions for things. There's generally one way to do things, which is simple. But it's not easy, because it's certainly not intuitive that "append" is the way to remove an array element.
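
As a sketch of how you might wrap that idiom for a concrete element type (this predates generics, so one helper per type), with the cost spelled out:

  // removeAt deletes the element at index i by copying list[i+1:]
  // over list[i:]. It is O(n) and reuses (and thus mutates) the
  // original backing array.
  func removeAt(list []int, i int) []int {
    return append(list[:i], list[i+1:]...)
  }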


Not OP, but I'd be happy to try and differentiate, as well.

Simple means uncomplicated, not a lot of pieces. A violin is arguably simpler than a guitar because of a lack of frets.

Easy means it is not difficult. A guitar is arguably easier than violin [0] because it has frets.

It's important to remember that "easy" is very subjective. What is easy for one person might be insurmountable to another.

tl;dr "simple" usually means "easy to understand" and "easy" usually means "easy to do". Both inherently assume some amount of prior knowledge or skill, so neither is entirely universal or objective.

[0]: I'm not trying to say one instrument is better or to disparage any guitarists. It's an example.


The primary source for this is https://www.infoq.com/presentations/Simple-Made-Easy/

(I am not entirely sure I agree with its thesis or its applicability to Go, but since nobody had actually linked you directly to the concept, I thought it would be worthwhile to do so.)


To me, what makes Go a simple language is its limited set of building blocks and its being very opinionated - this mostly relates to syntax and what the core language provides out of the box. With Go, you only need to understand looping, branching, channels, and slices, and you're mostly good to go.

Ease is measured by how readily a language may be used to solve a problem. As an example, text-wrangling with Perl is easy - but you may have to do it using a complex (i.e. not simple) representation.

Back to Go - channels are a simple concept, but are not (always) easy to work with if you do not put a lot of thought into your concurrency model.

Edit: I just thought of another way to express the difference between "simple" and "easy". The notation "1+1=2" is simple; however, proving that 1+1=2 is not easy (at least at the level of elementary students).


> "What about people who don't use credit cards?"

> "You know, China, much of Africa ..."

Oh, you don't even have to go that far. Of all the people I know maybe 1 out of 10 has a credit card. The irony in regard to your post: Most of them got one because "it was needed for something online" at some point. This is in Germany.


Maybe your sample is really biased, but the actual credit card ownership rate in Germany in 2017 was 53% and growing YoY, so it's likely quite a bit higher by now.

https://www.statista.com/statistics/865943/credit-card-owner...

You may need to copy & paste the title of that page into Google and access the page from there to see the data.


A programming language is a tool. I don’t think it should really be optimized or designed around how easy it is to teach someone who is at step zero, basic programming concepts. I do think Go is a decent language to teach up and coming professional devs precisely because it doesn’t hide the real complexity of what’s going on, but I’d probably opt for something a bit higher level as the absolute intro to programming.


I've been coding for 10+ years, primarily in Ruby and Python. I switched to golang recently for work. While it's an interesting language, it takes forever to express what I want the code to do, unlike in the previous languages.

Go's simplicity forces developers to introduce insane amounts of complexity.


After like 9 months of Go I am already starting to leave it behind for Rust, but one thing Rust still really lags behind Go in is the maturity and feature completeness of major libraries.

The main example in my mind is how powerful Cobra/Viper is to create CLI apps where config can come from multiple sources - files, flags, env vars. To do the same in Rust you need to write a lot of your own code and glue together several libraries.

There's also nothing I can find for Rust that can do database migrations as nicely as Goose for Go. Diesel can create its own migrations, but that's only if you're using Diesel, and I prefer SQLX over an ORM and Diesel isn't async yet.


Ruby and Python are notorious for being difficult to read because of all the foot guns in place. Monkey patching, duck typing, metaprogramming... and that's not even talking about structural limitations like the GIL.

Go is definitely more verbose and less "fun" to write, but it's 10X easier to reason about what's going on in a large application.

Of course, it's also the correct kind of solution for some types of problems. If you need a compiled language (many do), the competition to Go isn't Ruby and Python, it's C++ and Rust.


Or Nim or Zig or ...

Actually, I feel like Zig may be the most in the same compiled-but-keep-it-simple-like-C headspace as Go. I don't know either Go or Zig super well. From what I do know, Zig seems not quite as stubborn as Go about forcing code to be "simple to reason about" (which is also subjective, though maybe less so than "easy to do things").


I would say Zig is interesting in that "safe things are easy and pretty", "unsafe things are difficult and ugly", thus drawing eyes to the code that needs it. It's four lines of nasty looking code for me to convert a slice of bytes to a slice of (f64/f32/i32)... Which the compiler no-ops. This is dangerous because the alignment and byte count must be correct.


I would say both of those are interesting languages, but lack the amazing standard lib of Go and definitely the tooling and industry support.

Personally, I find Swift to be a really great language, but I can't deal with the Apple ecosystem, Xcode, raw Linux support...


Totally agree Re: Swift drawbacks.

Nim's stdlib is actually quite large (like 300 modules). I am sure there are things in Go's that are not in Nim's (like Go's `big`), but I am equally sure there are things in Nim's that are not in Go's (like Nim's `critbits`, `pegs`, etc).

I doubt there is much in Go's stdlib not available in Nim's nimble package system, but OTOH, I am also sure the Go ecosystem is far more vast. I just didn't want people left with the impression Nim's stdlib is as small as Zig's which it is definitely not. Nim has web server/client things and ftp/smtp clients, json and SQL parsers, etc.


> If you need a compiled language

I would wager the JVM/CLR are probably the main competition for Go, at least for product/tech companies and enterprise IT.


Definitely for server-side software, but not for binaries that need to be distributed to end users.


Python is widely considered to be very expressive and very readable.

You can write unreadable code in virtually any language, but that's beside the point.


Considered "very readable" by whom? It's pretty well accepted that dynamic/weakly typed languages are more difficult to deal with at scale.


Python is dynamically but strongly typed. “2+True” is a type error; once something has a type, it has a type. There’s no WAT.


Python is more strongly typed than JS/Perl, granted. But it is still very weakly typed overall. Here are some examples:

1. if list(): pass # implicit coercion from collection -> None -> bool. (Very uncommon weak typing and a terrible idea.)

2. a = 123; b = 4.5; c = a + b # implicit coercion from int -> float (Common but not universal weak typing. More often hides bugs than helps with ergonomics and readability but sometimes a worthwhile tradeoff.)

3. a = 1 + False # implicit coercion from bool to int (Common weak typing in scientific languages (for masking) and older C-family languages (for bit twiddling). However, that's still bad language design. Libraries/syntax sugar should special-case masking and bit twiddling. You should not have global coercion between bool and int.)

4. etc.


Variables can change type though. This is working python code:

  x = 1
  y = 2
  print(x + y)

  x = "no "
  y = "types"
  print(x + y)
Whereas this will not even compile in Go:

    x := 1
    y := 2
    x = "no"
    y = "types"


That's the difference between static and dynamic typing, not the difference between weak and strong. The values cannot be used as if they were another type.


It has nothing to do with the difference between strong and weak typing, and it also has nothing to do with the difference between static and dynamic typing. It is about Python not disambiguating between variable shadowing and variable reassignment.

Here's a comment of mine that explains why Python is weakly typed: https://news.ycombinator.com/item?id=26354039.


So what's the type signature of `print()` then?

I do think my example code demonstrates why Go is easier to reason at scale than Python. Python is conflating assignment and type declaration. Go has = and :=, so it's crystal clear what's going on.


> So what's the type signature of `print()` then?

I like the way Rust makes clear what are the different possibilities:

In Rust we use a macro to handle variadic arguments, but if it were just a case of supporting different types, not of supporting an arbitrary number of arguments, we would have had three options for the type signature:

1. Monomorphic duck typing: `fn print(arg: &impl Debug)`. Here the compiler simply generates multiple print functions, one for every type the function gets called on. The exact concrete type is known at compile time.

2. Polymorphic dynamic dispatch (with dynamic sizing) duck typing: `fn print(arg: Box<dyn Debug>)`. Here the compiler generates a vtable and allocates on the heap. Only a type approximation is known at compile time, not the exact concrete type, but it still counts as static typing.

3. Dynamic typing: `fn print(arg: Box<dyn Any>)`. Note `Any` instead of `Debug`. Full dynamic typing with the type completely unknown at compile time. Yuck! But occasionally useful for prototyping or for FFI with dynamically typed languages.


> what's the type signature of `print()` then?

Dynamic typing doesn't rule out polymorphism.

    void print(PyObject objects...)
where `PyObject` is a base type.

Additionally, you could perfectly well have constants, or require variables to have a fixed type, in a dynamic language. You would just pay a cost at runtime to check the type on assignment.


I would argue that you pay the cost at runtime but you also pay a cognitive overhead cost while writing in a dynamically typed language. Refactoring in particular is a lot more difficult.


I agree. But this particular example ironically has nothing to do with dynamic typing.


This has nothing to do with either dynamic or weak typing in Python. It's just Python not disambiguating between shadowing and reassignment.

Here's a comment of mine that explains why Python is weakly typed: https://news.ycombinator.com/item?id=26354039.


Bad example: booleans in Python have been integer subclasses since forever... 2 + True = 3. Also, False in [1, 2, 0] evaluates to True. But you're still right that Python is strongly typed.


Companies use golang for web services, which directly competes with Ruby and Python.


And Java, which (I feel like) is used for bigger, long-term projects.


> the competition to Go isn't Ruby and Python, it's C++ and Rust.

Crystal is a pretty solid alternative to golang.


I don't think you can write that much of a rant using the term "ease of use" without explaining your view on what "ease of use" is. Does "use" really only mean a beginner using it for the first time? You mention languages with punctuation; what alternatives do you see instead? You claim "ease of use is THE most important part of a language", but why is it the most important?


> Back to languages. If you've never taught an introduction to computer programming for a general audience you are blind to what does and does not matter about a language. You look at things like `foo.bar()` and think "Yeah that's a simple method invocation" and have no idea how many people you just lost.

I keenly remember in college when they were first starting to teach me C++ and I asked something like “Ok: I hear what you’re saying that this is a function, and that’s a parameter, but my question is how does the computer know that you’ve named it [whatever the variable name was]?”

The teacher had no understanding that this was a conceptual barrier.

Of course now I know that the answer is “because order of the syntax tells it so“, but stuff like that made those classes much harder than they needed to be.


> Ease of use is the most important part of a language.

It's really a matter of who your target audience is and what they're trying to achieve, though, isn't it? You might make a language easy for the whole world to use, but simultaneously make it hard for specific tasks. Likewise, a language might be easy to use for experts who are trying to achieve a specific task, but difficult for newbies. That's totally okay.

BASIC was super easy to understand and helped me get into programming, but there's no way I would use it for anything serious today.


I feel completely the opposite. While I do agree that we should make the barrier to entry as low as possible, a lot of the time the barrier to entry is in conflict with the usefulness of a tool. We should lower the barrier to entry without losing any usefulness, and not any further.


> Barriers to entry are invisible. They are invisible to people on the inside and most frequently invisible to people who have a hand in creating those barriers.

I just want to say that I really, really like this phrasing.

I have a lot of opinions on how programming languages could be improved, and however much I disagree with Rob Pike on types, I still think Go hit a real sweet spot.


> You know, China

Another thing I find very interesting is that Go is very popular in China:

https://blog.jetbrains.com/go/2021/02/03/the-state-of-go/

The US ranked #7; it barely had more devs than, say, Belarus.


But the metric was credit cards in China


Going by their first example, the fact that I couldn't write

   list.delete(value)
without realizing it involves a linear search is considered one of the benefits of Go.

That said, I agree Go could use a few more ergonomic and handy methods, while still remaining efficient.

The example on concurrency also has a different answer. Go gives you all the tools, but there are many different ways their problem can be solved. By explicitly avoiding a "join" method or any default 'channel' behavior, Go makes all of these possible. What is missing is some of the popular patterns that emerged in the last few years; these need to make it into the stdlib or some cookbooks.

1. If you want a worker model, I would start n=3 goroutines that all receive from a single channel. You don't need to muck around with a channel of buffer 3, as in the example.

2. If the workers already return data from a computation, the read from the driver serves as the wait.

3. In other cases, there is sync.WaitGroup available to synchronize completion status.

4. End of work from the driver can be indicated via a channel close. Closed channels can still be read from until they are empty and the reader can even detect end-of-channel.

Designing a concurrent system is a bit complicated. Some tutorials do make it sound like the `go` keyword is all you need. All of this can be fixed by improving the stdlib or cookbooks.
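
A minimal sketch combining points 1, 3, and 4 above - three workers on one channel, completion tracked with sync.WaitGroup, end of work signaled by closing the channel:

  package main

  import (
    "fmt"
    "sync"
  )

  func main() {
    jobs := make(chan int)
    var wg sync.WaitGroup

    for w := 0; w < 3; w++ { // n=3 workers, all receiving from one channel
      wg.Add(1)
      go func(id int) {
        defer wg.Done()
        for j := range jobs { // loop ends once jobs is closed and drained
          fmt.Println("worker", id, "processed", j)
        }
      }(w)
    }

    for j := 0; j < 9; j++ {
      jobs <- j
    }
    close(jobs) // end of work from the driver (point 4)
    wg.Wait()   // synchronize completion (point 3)
  }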


> Going by their first example, the fact that I couldn't write list.delete(value) without realizing it involves a linear search is considered one of the benefits of Go.

I think this is mostly okay. Most experienced programmers will realize this is a linear search and that it may be slow on large arrays. And it turns out that in the overwhelming majority of cases, that's just fine!

For other cases it's not-so-fine, but the mere presence of "list.delete" doesn't really stand in the way of implementing another, more efficient, solution.

Overall, I certainly think it's better than implementing the same linear searches all the time yourself!


And it's not exactly gatekeeping to think that anyone who uses a method on any data structure on their platform should know its (rough) complexity (at least in the family: constant, linear, or better/worse than linear). Removing a value from an array-backed list anywhere but at the end is usually not a good idea.

My main problem with having "remove items by value" in standard library APIs is that it assumes a notion of value equality. Having that notion inserted at the very "bottom" (like Equals/GetHashCode in .NET, for example) is a mistake that has caused an infinite amount of tears.

I much prefer this situation where the user must provide his own implementation and think about equality. It's boilerplate, but the boilerplate is useful here.
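
A sketch of what that might look like in Go; `User` and `deleteFirst` are hypothetical names, not stdlib, and the caller decides what equality means at the call site:

  type User struct {
    ID   int
    Name string
  }

  // deleteFirst removes the first element that eq considers equal to
  // target. Equality is supplied by the caller, not baked into User.
  func deleteFirst(list []User, target User, eq func(a, b User) bool) []User {
    for i, u := range list {
      if eq(u, target) {
        return append(list[:i], list[i+1:]...)
      }
    }
    return list
  }

Called as, say, deleteFirst(users, u, func(a, b User) bool { return a.ID == b.ID }) - identity by ID here, while the next call site is free to choose a different notion of equality.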


> I much prefer this situation where the user must provide his own implementation and think about equality.

But that doesn’t scale beyond the first user. Every subsequent developer now needs to read implementations to understand what the code does thanks to a lack of standardization for these functions. Consider: if there were a smaller standard library with fewer interfaces and conventions, it would become a lot harder to understand a number of concepts in Go, by design. That’s fine, but conventions are what made Ruby on Rails projects so successful that they scaled up to being the monolith monstrosities most of us with startup experience ended up knowing them to be.

Note that I’m suggesting something akin to C++’s standard library, where algorithms are already written for you to use and compose with. Yes, the drawbacks are a slower compile time, and some conventions like constexpr can really complicate things, but… I can’t say that a larger standard library or a larger set of conventions would make Go harder to use, assuming the implementations hide enough complexity to outweigh the overhead of learning them in the first place.

What functions provide more value than the overhead required to learn them? Delete is one such function; immutable data structures generally are likely another. Yes, the documentation needs to specify big-O complexity, but it can still be easy to read. For example: https://immutable-js.github.io/immutable-js/docs/#/List

The only way to get easier than that would be to use the same syntax for immutable operations as mutable ones: https://immerjs.github.io/immer/docs/introduction

I recognize that the Go community finds the built-in standard library restrictive now, but that’s no reason not to support a versioned standard library that ships separately but contains common functionality. I can only point to TypeScript for how such a system might work, given the large community project that exists to provide and publish types for that language, without actually publishing them with the language or compiler itself, excluding browser dom etc.


> But that doesn’t scale beyond the first user. Every subsequent developer now needs to read implementations to understand what the code does thanks to a lack of standardization for these functions.

There is no standardization. It’s a hidden piece of logic that developers slowly and painfully understand.

If you have to pass an equality function when removing an item from a list by value (or equivalently when creating a set or dictionary), it would always be explicit. That doesn’t mean it can’t be standardized. A framework can provide typical implementations (and often does!) such as “ReferenceEquals” or “StringComparison.Ordinal” etc.

Another unfortunate side effect of “equality as a property of the type” is that you can only have one such equality. And if you are passed a Set of dogs you still can’t know for sure whether the equality used is actually the one declared in the Dog type or at Set creation (or possibly in the Animal base class). It’s a mess. And it’s so easy to avoid - the default virtual equality is simply not necessary.


This is why I tend to like the example shown by the Immutable javascript libraries. They implement find-by-value using a library-specific calculation of equality built on primitives, but if you need something custom, you can pass predicate functions to, for example, perform your own equality check.

I think we're agreeing here, despite the initial disagreement. Standards don't have to solve every edge case, nor do they have to integrate with existing language patterns such as "equality as a property of the type" though a standard function could have multiple variants. The Go way of doing this might be like how regular expression function names are built.

Also, the library doesn't have to implement delete that way. If functional programming and predicate functions are an encouraged design pattern, you could replace delete-by-value functions with a filter function, which could clearly indicate what happens if more than one value is found as well as how the equality check is performed, but glosses over using slices to update the array. Some might say it's slower (https://medium.com/@habibridho/here-is-why-no-one-write-gene...) but it doesn't have to be slower; it's just tradeoffs in how the Go compiler and language currently work vs could work.


Why would introducing value equality of all things be a problem? I think the opposite is true: many languages force you to write error-prone boilerplate because they lack a good definition of value equality built into the language and ecosystem.

Go in particular is worse on this front than any language more high-level than C. It defines equality for a very limited subset of built-in types, with no way to extend that notion of equality to anything that is not covered, nor any way to override the default equality assumptions. This makes it extremely painful whenever you want to do something even slightly advanced, such as creating a map with a struct as key when that struct has a pointer field.

And since pointers have lots of overloaded uses in Go, this turns a potentially small optimization (remember this field by pointer to avoid creating a copy) into a mammoth rewrite (we need to touch all code which was storing these as map keys).


I’m sorry for the “gatekeeping”, but do you really want to work together with someone who doesn’t know the language’s standard library or won’t even look at the documentation? And would it instead be a positive that he/she writes the very implementation themselves?


This problem transcends the documentation of any given language's standard library. list.delete(value) in any programming language is degenerate without an ordering or hash built into the underlying data structure.

If you ever need to delete based on value, it's a smell that a list is the wrong data structure. I'm sure there are cases in constrained environments where a linear search is heuristically okay, but generally in application development list.delete(value) is a hint that you're using the wrong data structure.


Good point. I guess the reason the author finds removing from lists relevant is that there is no Set structure in the Go standard library. They should use a `map[t]bool`, but that is non-obvious to many programmers.


My Google search suggests that maps in Go aren’t ordered. If so this doesn’t work in place of an array.


`map[t]struct{}` saves you a few bytes per entry. Just use the `_,found := foo[key]` form of lookup
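
A tiny sketch of the map-as-set idiom:

  package main

  import "fmt"

  func main() {
    seen := map[string]struct{}{} // empty struct values take no space
    seen["a"] = struct{}{}        // add
    _, found := seen["a"]         // membership test
    fmt.Println(found)            // true
    delete(seen, "a")             // remove
  }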


Unfortunately, map[t]bool (or map[t]struct{}) only works for special structs that happen to have equality defined for them. It's not a general purpose solution, it's just a hack.


map[t]struct{} so that the values don't require any space.


> Most experienced programmers will realize this is a linear search and that it may be slow on large arrays.

An experienced programmer might think that values in a list with such an operation are indexed by a hash and that delete-by-value is O(1), unless there is a big warning in the documentation. A novice programmer probably hasn't seen such a combination of an array and a hash table and will assume O(N), as taught in college.


Experienced programmers, if perf matters to their project, should think about their data structures, memory size, and access patterns. It's not clear why an experienced programmer would think an array defaults to wasting memory on a hash table. Why use an array, if they need lots of values and to delete them at random?


Experienced programmers will know that such a thing is possible, but I've never seen any (non-JavaScript, non-PHP) language where primitive arrays involve a hash table.


And even in JS and PHP (and Lua) where arrays and hash tables are technically the same data structure, a hash table used as an array isn't actually indexed on elements, just numbers, so you don't get O(1) search.


An experienced programmer will know whether the vanilla array of the language they use features this kind of convoluted data structure.

Which, for virtually all languages, is “obviously not”.


Developers in other languages know the performance characteristics of list.delete because there is only one implementation throughout 99% of their codebase. In Go, you have to look at the implementation every single time to make sure it's doing the right thing.


Haskell documentation uses the Big O notation everywhere which I really love. I wish Go and many other languages did the same.

An example would be https://hackage.haskell.org/package/containers-0.4.0.0/docs/....

  size :: Map k a -> Int
  O(1). The number of elements in the map.

  member :: Ord k => k -> Map k a -> Bool
  O(log n). Is the key a member of the map? See also notMember.

  lookup :: Ord k => k -> Map k a -> Maybe a
  O(log n). Lookup the value at a key in the map.
And so forth...


Redis does the same; e.g. [1]:

> Time complexity: O(N) where N is the number of elements to traverse to get to the element at index. This makes asking for the first or the last element of the list O(1).

I agree this is something more documentations should do when possible; it doesn't even have to be big-O notation as far as I'm concerned, just a "this will iterate over all keys" will be fine.

Ruby's delete() doesn't mention any of this, although you can easily see the (C) implementation in the documentation[2]. In principle at least, this doesn't have to be O(n) if the underlying implementation were a hash, for example, which of course has its own downsides; but something like PHP may actually do this with its arrays, as they're kind of a mixed data structure? Not sure.

[1]: from https://redis.io/commands/lindex

[2]: https://ruby-doc.org/core-3.0.0/Array.html#method-i-delete


Redis's documentation is wonderful, I wish every system I use were documented even half as well.


In the C++ standard, the complexity of algorithms is a requirement for a compliant implementation. Many textbooks skip it, but better documentation references it. For the "delete element from list" example from this thread, https://en.cppreference.com/w/cpp/container/list/remove states that there is a linear search.


Andrei Alexandrescu has a scheme to encode these in the D type system on his website. It didn't get merged into the standard library in the end, but you can happily do it in your own code.


My experience is that developers in other languages have no idea of the performance characteristics of list.delete. It seems low-level and therefore obviously fast.


"Fancy algorithms are slow when n is small, and n is usually small." Rob Pike, creator of Go.


Not at the scale of his employer.


You don't need to be Google scale. Google scale is just "medium large startup" scale replicated across a zillion regions. Gmail has an enormous amount of data behind it, but it's not like calling foo.sort() is going to be iterating over billions of accounts.


But usually when using lists, isn't performance dead anyway (due to cache misses)?


"List" is usually a generic term for any data structure that can hold many items, and that has an API for adding or removing items. This can be implemented using an array, a linked-list, a hasthable, or a more advanced data structure.

Also, in languages with GCs, linked lists do not necessarily cause more cache misses than arrays, due to the way allocation works (this is especially true if you have a compacting GC, but bump-pointer allocation often makes it work even when you don't).


Linked-lists, maybe, but `list`s in Python are just resizable arrays.


resizable arrays of pointers to objects stored elsewhere, which is going to blow through your cache anyway, especially when you start accessing those objects' attributes. so it's more a memory management strategy than a performance optimization.


But when removing an item it’s not necessary to visit every element, just re-arrange the pointers (and in CPython, modify the reference count of the item that was removed).


When removing an item by value (being the use case in this thread), you do need to visit the elements to check the value.


Yes but the problem there is not that your list is backed by a linked list or an array, it's that every single value is boxed and needs dereferencing. The difference between no hops and one hop totally trumps one hop vs two hops.


You can write a list data structure with performant 'delete' properties if you're willing to maintain a sort or a hash table. There would be people here bitching about memory usage if the stdlib did that natively. Here's a solution: don't use list.delete. You're using the wrong data structure if that's your solution to whatever collection you're maintaining.


Downvotes on comp sci 102. Nice.


We have this saying in my country "if grandma had a mustache she'd be a grandpa".


I like my country's version better: "if grandma had wheels she'd be a bicycle".


Which language/country is this?

I will use this in English regardless. I like it.


Mexican Spanish. I don't know if they also use that saying in other Spanish speaking countries besides Mexico. In Spanish it goes: "Si mi abuelita tuviera ruedas, sería bicicleta".


If your aunt had nuts she'd be your uncle


That has the same basic idea as the mustache version, but is pithier. I still prefer the surrealism of the bicycle.


You'd probably do better on HN to substitute 'decrying' for 'bitching'.


Noted, thanks. Didn't realize that was the issue.


Yes!

Perfect example: datetimes. In Golang, if you want to convert a string to a datetime (or vice versa), you _need_ to look at the godoc for the time package, because it uses very specific permutations of "Jan 2, 2006" that you _have_ to include in your code. This is much more confusing than how this would be done in Python (provide the format of the date using strftime-style notation) or Ruby (provide the string, get a DateTime, done).
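
For reference, layouts in Go's time package are written as renderings of the fixed reference time Mon Jan 2 15:04:05 MST 2006. A minimal sketch:

  package main

  import (
    "fmt"
    "log"
    "time"
  )

  func main() {
    // The layout string is the reference time in the desired format.
    t, err := time.Parse("2006-01-02", "2021-02-22")
    if err != nil {
      log.Fatal(err)
    }
    fmt.Println(t.Format("Jan 2, 2006")) // Feb 22, 2021
  }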


> without realizing it involves a linear search is considered one of the benefits of Go.

The primary problem that I think the author was trying to get at is that Go lacks tools for building abstractions. In a language with generic functions (for instance), it would be possible to implement a delete_at function completely in user code. The fact that you cannot reflects poorly on the language.

Of course it will always be possible to write functions that have arbitrarily slow time complexities. That's something you have to be aware of, with any function; it's not a reason to avoid including a linear-time function to remove an element at a given index of an array.


That’s a deliberate choice in many ways, and it means you don’t end up building castles in the air, or worse, working within someone else’s castle in the air. See the lack of inheritance for a good example of a missing abstraction which improves the language.

Abstractions are pretty and distracting but they are not why we write code and each one has a cost, which is often not apparent when introduced.

That’s not to say Go is perfect; there are a few little abstractions it would IMO be improved by having, and the slice handling in particular could be more elegant, based on extending slice types or a slices package rather than built-in generic functions.


I don’t see why a well-built standard library would be in the way. In the case of delete, well, write your own linear search if you are into that, though it gets old fast IMO. But will you also write a sort algorithm by hand? Because I am fairly sure it won’t be competitive with a properly written one. Also, what about function composition? Will you write a complicated for loop with ifs and whatnot when a filter, foldl and friends would be much more readable?


Go actually does contain a sort package in the stdlib: https://golang.org/pkg/sort/ with the most ergonomic function being sort.Slice
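
A small sketch of sort.Slice: the ordering is supplied by the caller as a closure, so sorting by an arbitrary key is one line:

  package main

  import (
    "fmt"
    "sort"
  )

  func main() {
    people := []struct {
      Name string
      Age  int
    }{{"Ann", 30}, {"Bob", 25}}

    // The less function reports whether element i must sort before j.
    sort.Slice(people, func(i, j int) bool { return people[i].Age < people[j].Age })
    fmt.Println(people) // [{Bob 25} {Ann 30}]
  }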

As for a complicated for loop instead of the functional equivalent: I too am a fan of functional patterns in this case, but the for loop equivalent in my opinion is just as readable in 99% of cases. A little more verbose, that's all. Matter of taste.


Seems to me there are a lot of different types of collections which all have a delete operation. Feels like Go forcing you to hand-roll a delete forces you to violate the principle that a bit of code should care as little as possible about things it's not about.

If code breaks because list is now a hashmap, that seems like an anti-feature.


> the principle that a bit of code should care as little as possible about things it's not about

Can you find/quote where that principle comes from? I'm genuinely curious.


I think it is mostly based on the OOP paradigm, like the principle of least knowledge.


It's a motivation for things like OOP, generics, and macros.


I agree filtering and many generic operations on slices would be nice; I think they have plans for that once generics lands. I don’t agree they are huge problems though; they are much simpler than sorting algorithms, for example (which are included).

A range vs a series of filtering operations is an interesting question in terms of which is easiest to write or, crucially, for a beginner to understand after writing - I’m not convinced foldl is easy for beginners. If you like abstractions like that, Go will be abhorrent to you, and that’s ok.

It is easy to build your own filter on a specific type - I’ve only done it a couple of times, and it is literally no more than a for loop/range. Where I have written extensions and would appreciate helpers in the std lib is for things like lists of ints and strings - contains, filter and friends would be welcome there in the stdlib (I use my own at present).
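
Something like this sketch, on a concrete type - set up a new list, add to it in a range, return it:

  // filterInts keeps the elements for which keep returns true.
  func filterInts(xs []int, keep func(int) bool) []int {
    var out []int
    for _, x := range xs {
      if keep(x) {
        out = append(out, x)
      }
    }
    return out
  }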

For filter etc I think I’d use them but am not convinced they would change the code much - the complexity is in what you do to filter not the filtering operation itself which probably adds about 3 lines and is very simple (set up new list, add to list in range, return list). For example in existing code I have lots of operations on lists of items which sit behind a function and for hardly any of them would I delete the function and just use filter in place at the call site, because I have the function to hide the complexity of the filtering itself (all those if conditions), which would not go away. I wouldn’t even bother rewriting the range to use filter because it’s not much clearer.


> and it means you don’t end up building castles in the air, or worse, working within someone else’s castle in the air. See the lack of inheritance for a good example of a missing abstraction which improves the language.

Very well said. If only more developers would understand this.

Yes, I know: creating interfaces, inheriting everything from the void, adding a large number of layers to build a beautifully constructed reusable structure is a nice urge, but it may well happen that no one reuses it, because no one wants to dive through infinite-level lasagna with ravioli rolling around in it (and, thank god, Go doesn't have try/catch, throwing exceptions through all the layers and forcing you down into the guts of the construct to figure out what it means). Or, as often happens, they will be forced to reuse it, complicating their life and their code.

I do understand that inheritance, and even operator overloading, has its place and can be extremely useful. But there is another side: everybody overdoes it because it "might come in handy later", and the code becomes a case study for "How to write unmaintainable code"[2].

When I am coding I am not doing it for philosophical reasons, and Go has so far succeeded in being a very helpful tool without many "meaning of life" complications. And I would love to see it stay that way.

If you are forcing me to read the documentation / code (presumably I already know what I am trying to solve) just to be able to use the "beautiful OO construct", forcing me to study your beautiful design, you have failed in making it, and I have seen this everywhere in Java. I just hope the people who make everything more complicated than it needs to be won't hop from the Java[1] train to the Go train. I really don't want them anywhere close.

[1]https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

[2]https://github.com/Droogans/unmaintainable-code (and 20 others)


Yeah, this. I always found myself creating beautiful OO constructs that were invariably overfitted to my understanding of the state of the problem at that time, and were really hard work to change once I understood the problem better (or it changed).


Yeah I’ve fallen into this trap and worst of all, I’ve had my team fall into it and not find their way out.

Abstractions are a great idea, and Go has plenty of them (io.Writer is ubiquitous), but most business logic is dead boring and doesn't need them.


The advantage here isn't that it's ergonomic; it's that if I get an "unknown" array I have no special knowledge about, I can simply use list.delete instead of having to write a linear delete, and sleep soundly in the knowledge that it'll have well-known performance characteristics for that case. If I know something special about an array, I'm still free to write my own deletion algorithm, and nobody would fault me for that.


> What is missing is some of the popular patterns that emerged in the last few years

Erlang has had these "popular patterns" for decades. And all the languages that ignore them end up making devs reimplement them, poorly, with third-party libs and ad-hoc solutions.


It has been said that the GoF (Gang of Four) software design patterns are signs of language defects.

Almost all of them can be implemented as single lines of code in Python.


Most languages codify common idioms from earlier ones; I don’t see how most of the patterns are any different.

I’m sure there were a lot of assembly programs that used the “call stack pattern”. I’m sure there are a lot of C programs that abuse struct packing to do inheritance (the “class pattern”, sometimes with crappy vtables) or the preprocessor to do “templates”. And even in Python, you’ve got to use the visitor pattern due to single dispatch.


I'm just learning Go myself; where can I learn or find a reference for these popular patterns that have emerged over the past few years?


What comes in handy is this link [1], but I will update if I find any better links, or others might chime in.

[1] https://blog.golang.org/pipelines

It is a very long doc, but that also shows just how many concurrency patterns one might want.

My own pattern is typically:

1. Decide the level of parallelism ahead of time and start that many workers (that many `go X()` invocations).

2. Set up a sync.WaitGroup for the same count.

3. Create two channels, one for each direction.

4. The job itself needs some sort of struct to hold its details.

Most of my jobs don't support mid-work cancelation, so I don't bother with anything else.
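
Roughly, as a sketch (Job, Result, process and nWorkers are placeholders for whatever the job needs):

  jobs := make(chan Job)
  results := make(chan Result)
  var wg sync.WaitGroup

  wg.Add(nWorkers) // parallelism fixed ahead of time
  for i := 0; i < nWorkers; i++ {
    go func() {
      defer wg.Done()
      for j := range jobs { // drains jobs until the channel is closed
        results <- process(j)
      }
    }()
  }

  go func() {
    wg.Wait()
    close(results) // lets the consumer's range over results terminate
  }()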


Your 1. and 2. are merged in the almost-stdlib package errgroup: https://pkg.go.dev/golang.org/x/sync/errgroup.

It also uses context (useful for long-running, concurrent jobs) and handles mid-work cancellation.
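
A sketch of the same fan-out with errgroup (jobs and process are placeholders; needs golang.org/x/sync/errgroup):

  g, ctx := errgroup.WithContext(context.Background())
  for _, job := range jobs {
    job := job // capture the loop variable for the closure
    g.Go(func() error {
      return process(ctx, job) // the first error cancels ctx for the others
    })
  }
  if err := g.Wait(); err != nil { // blocks until every goroutine returns
    log.Fatal(err)
  }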


Study the generic reader/writer implementations in the io module. (On my system, those sources are in /usr/lib/go/src/io.) The io.Reader and io.Writer interfaces are very simple, but very powerful because of how they allow composition. A shell pipeline like `cat somefile.dat | base64 -d | gzip -d | jq .` can be quite directly translated into chained io.Readers and io.Writers.
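
Leaving the jq step aside, that pipeline might be sketched like this (using os, encoding/base64, compress/gzip and io; error handling elided to comments):

  f, err := os.Open("somefile.dat")
  // handle err
  defer f.Close()

  // each stage wraps the previous io.Reader, just like a shell pipe
  b64 := base64.NewDecoder(base64.StdEncoding, f)
  gz, err := gzip.NewReader(b64)
  // handle err
  defer gz.Close()

  // data is pulled through the whole chain lazily as Stdout consumes it
  io.Copy(os.Stdout, gz)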

Another example of this is how HTTP middlewares chain together, see for example all the middlewares in https://github.com/gorilla/handlers. All of these exhibit one particular quality of idiomatic Go code: a preference for composition over inheritance.
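
The middleware shape itself is tiny; a hypothetical logging middleware might look something like this (mux is an assumed http.Handler):

  // a middleware is just a func(http.Handler) http.Handler
  func logging(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
      log.Println(r.Method, r.URL.Path)
      next.ServeHTTP(w, r) // hand off to the wrapped handler
    })
  }

  // composition reads inside-out: logging runs first, then the mux
  http.ListenAndServe(":8080", logging(mux))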

Another quality of idiomatic Go code is that concurrent algorithms prefer channels over locking mechanisms (unless the performance penalty of using channels is too severe). I don't have immediate examples coming to mind on this one though, since the use of channels and mutexes tends to be quite intertwined with the algorithm in question.



This looks like some C++ or Java developer trying to shoehorn OOP concepts into a language that is not OOP-first. Just to pick out one particularly egregious example:

> Registry: Keep track of all subclasses of a given class

Go does not have classes.


Since simpler languages are easier to work with, let's just remove goroutines and channels from Go. Then all these problems will just go away.


It's okay. They will add generics soon, someone will immediately write a generic array.Delete() function, it will clog compiling and linking, and in a couple more iterations we will have another JVM.


Linear search if it's not sorted. If it's sorted, binary search will be much faster. So you need a flag to tell the function to use one of the two. This is why writing generic functions can be tricky. Programmers may use something inefficient just because it makes their lives easier.


Linear search can be faster than binary search for small arrays, depending on your processor architecture.

Writing generic functions is difficult, so it's nice if a language allows people to do so; otherwise you get N inefficient and/or buggy reimplementations of the function in every project needing it. (Not sure if that is your point.)


Linear search is also quite a bit faster than maps for small arrays (in Go at least, last time I tested it).



My point was that the author uses an example like search and calls Go not an easy language. So I was trying to give an example where it's not necessarily a linear search, or where search isn't trivial...


If your developers hand-write a binary search to delete an element, you are extremely likely to end up with bugs. So it would be nice if there were a generic binary search too.


That Go doesn’t have the equivalent of `std::lower_bound` is pretty ridiculous.


Binary search won’t be any faster (in big O) because it’s backed by an array: you’ll still need to copy every element after the removed one from slot n to n-1.


Does Ruby track that a list is already sorted? If not, you are still adding cognitive overhead for the programmer to track it


You can sort by many things. Knowing where you sort (usually as far up the chain as possible) is important.


Yes. That very example shows why it is hard to write fast programs in Ruby. It's not even the interpreter (which is slow); it's that Ruby is pure magic.

I was never able to understand what was happening when I looked at a particular snippet of Ruby code, if it was not me who wrote it. With Go, understanding other people's code is a trivial task.


What part of

    list.delete(x)
Is hard to understand if you didn’t write it?


If we're still talking about Ruby here: List could be _anything_. It might be a list of numbers, it can be a dataset in a remote server, it can be a web page parsed into a list of sentences, it might be list of people in Active Directory.

You literally can't know what's going on under the hood without popping it and looking for yourself.

Ruby's infatuation with clever magic code is what turned me off it years ago. You can get up to 80% in 20% of the time compared to other languages, but then you spend the 80% of time you have left fighting against the magic to get the last 20% done properly. There's way too much stuff You Just Need To Know.


I don't think that example tells the whole story. You have to zoom out and consider what code bases look like when this trade-off is made repeatedly by devs. When "easiness" or DRYness becomes king you get ravioli code, and it becomes unintelligible. Keep it simple.


My persistent impression of the Ruby ecosystem is that everyone optimizes for clever one-liners so hard that all other priorities go out the window.


As much as I don't personally enjoy writing Go, this is a huge benefit. There's a decent amount of conformity around the "right way" of doing things. I'll admit it has definitely made my approach to solving problems in other languages much simpler. I'm now back to writing loops in JavaScript when I use it. It really bugs the functional programming folks but... no one can argue that it's readable, and it's pretty fast.


But no one said Go was easy. It's just simple and easy to reason about. There's a huge benefit to something doing exactly what it says. No hidden behaviours, no overly complex syntax, nothing like that.

I came from C#, where I have gotten increasingly frustrated with the number of additions to the language. Like, we now have classes, structs and records. We have async/await, but we also have channels. This is what having a huge toolbox causes: some people are going to write code using channels, while others will use async. Some people will use features that others don't care about.

I think there's a huge benefit in a simple language. In Go I know that there are only 2 ways to do concurrency: you either use goroutines and mutexes, or you use goroutines and channels.

Generics will bring a lot of helper functions that weren't possible before which will remove a lot of repetitive code.

But otherwise I am super happy with writing Go code. All my pet peeves of something like modern Java or C# are gone.


> But no one said Go was easy.

Lots of people have said Go is easy. In particular, that Go code is "easy to read" is an oft-cited benefit of Go.

Also, if you go to golang.org, the first thing you see is:

> Go is an open source programming language that makes it *easy* to build simple, reliable, and efficient software.

(emphasis mine)


Go is easy != easy to read.

I personally do think Go is easy to read. I don't get bitten in the ass by hidden behaviours, and there's no inheritance hierarchy to step through 5 files of. Everyone's Go code looks more or less the same (unless you're an absolute beginner) because there are only so many ways to do something. Compared to C#, where reading someone else's code means I have to take a shot to calm my nerves and then take a guess at which permutation of language features they decided to use today.

A simple language is easier to read. This doesn't mean it's going to be easy for anyone who isn't a Go developer. There's no claims of that. But given a week to understand syntax and getting used to reading files, I don't think there's many places where you'd get stumped.

>Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.

This still holds true for me. A simple language builds simple software. No noob Go dev is going to have a good time, but that's true in almost every language. Once you understand how everything works, how interfaces work, how channels work, you can simplify most problems down and build them with ease. Needing to write a few extra lines of code doesn't mean you can't write simple, reliable and efficient software.

Neither of those quotes says Go is an all-round easy language. It's not like you can throw it at a baby and get back Kubernetes. It's not easy to build custom containers, it's not easy to manipulate lists, it's not easy to do a bunch of things you'd do in one line in something like C#. But I think these are small hurdles compared to the mental burden you're relieved of when you see that the rest of the language is equally simple and benefits from it.


As someone who reads a lot more Go than I write, Go is awful to read.

Reading Go means that I keep having to read idioms (as posted in the article) and parse them back into the actual intent, rather than just reading the intent directly. But I can't even trust that, since the nth rewrite of the idiom might have screwed it up subtly.

Reading Go means that I can't easily navigate to the definition of a function, because Go implicitly merges all the files in a folder into a single namespace, so I have to try them one by one.

Reading Go means that I usually don't have tooling that can jump to definition, because there are umpteen different dependency management tools, and Gopls only supports one of them.

Reading Go means that even when I have tooling available, and it feels like working today, jump-to-definition becomes impossible as soon as interfaces are involved, because structural typing makes it impossible to know what is an intentional implementation of that interface.


>Reading Go means that I can't easily navigate to the definition of a function, because it implicitly merges folders into a single namespace, so I have to try them one by one.

Not sure what you mean here

>Reading Go means that I usually don't have tooling that can jump to definition, because there are umpteen different dependency management tools, and Gopls only supports one of them.

There's 1 dependency management tool though. It's go modules. Unless you live 2 years in the past?

>Reading Go means that even when I have tooling available, and it feels like working today, jump-to-definition becomes impossible as soon as interfaces are involved, because structural typing makes it impossible to know what is an intentional implementation of that interface.

You just right click in VS Code and click "Go to implementations". Or if you use Goland it's right there on the gutter.

Reading comments like this really makes me wonder if anyone complaining about Go has actually used it.


> Not sure what you mean here

In Rust, if I see a call to `foo::bar::baz()`, I instantly know that `foo/bar.rs` or `foo/bar/mod.rs` (and only one will ever exist) contains either a `fn baz()` (in which case I'm done) or a `pub use spam::baz;` (in which case I can follow the same algorithm again).

In Go, if I see a call to `import ("foo/bar"); bar.baz()`, all I know is that ONE of the files in the folder `foo/bar` contains a function `baz`, with no direction on which it is.

> There's 1 dependency management tool though. It's go modules. Unless you live 2 years in the past?

As I said, 99% of my interaction with Go is reading the code that other people wrote. I don't make the decisions about which dependency management tools they use.

> You just right click in VS Code and click "Go to implementations". Or if you use Goland it's right there on the gutter.

I use Emacs, but VS Code still uses the same broken Gopls.

> Reading comments like this really makes me wonder if anyone complaining about Go has actually used it.

Can't really say I disagree, I guess.


> In Go, if I see a call to `import ("foo/bar"); bar.baz()`, all I know is that ONE of the files in the folder `foo/bar` contains a function `baz`, with no direction on which it is.

This is a code smell for poor file organization in your "foo/bar" module: in a well-organized project, it should be obvious which file in a module contains a given function[1]. Go doesn't force a file=module paradigm (preferring the folder to be the basis); however, it doesn't preclude having one file per module, if that's what you prefer. If you're reading someone else's poorly organized code, you can always use grep.

1. Say, you have an `animal` module, the first place you check for `NewCow()` is `animal/cow.go`


> Go doesn't force file=module paradigm (preferring the folder to be the basis),

Yes, this is what that complaint was about?

> however, it doesn't preclude having one file per module, if that's what you prefer.

> 1. Say, you have an `animal` module, the first place you check for `NewCow()` is `animal/cow.go`

This whole subthread was about reading other people's Go code. Of course your own code is going to look organized to yourself!

> If you're reading someone else's poorly organized code, you can always use grep.

Yeah, ripgrep tends to be what saves me in the end. Still annoying to have to break it out all the time.


> Yes, this is what that complaint was about?...This whole subthread was about reading other people's Go code.

So the complaint, restated is "Go doesn't prevent other people from writing bad code?" Am I getting you right? If so, well, I have nothing to say about that.

edit: I do actually have something to say. I just remembered having to work with an 82,000-line long Perl module file that would defeat any IDE. Fun times. No language can save you from poorly organized projects, whether the modules are file-based or folder-based.


I would say it's closer to "Go doesn't do a really simple thing that nudges people towards writing more readable code, while having basically no tradeoffs".

Considering the far more tedious tradeoffs that Go does force on users in the name of supposed readability (for example: the lack of generics), I'd consider that a pretty big failure.

I don't expect them to be perfect. I do expect them to try.


FYI: Go's lack of generics was not related to readability. The Go team didn't have a solution they liked, so instead of saddling the language with a half-assed solution forever, they waited for a more elegant one. Also: the design of Go generics was recently approved (as in, within the past month).

It's no secret that Go is targeted at "programming in the large". Go's design favors larger, fewer modules; how those modules are structured is left to teams/individuals. I may be lucky to work with a great team, but I always find the code where I expect to find it. When I'm starting from scratch, I utilize modules, <$VERB>er interfaces, structs and receiver functions: I cannot remember ever running into an ambiguous scenario where I was unsure where a particular piece of code ought to go.


Like the parent, I read more Go than I write, and I have never seen single-file-per-module (unless the entire project is one file). Sometimes it's obvious what file a function is in (like in your examples), but a lot of the time it isn't. For example, is `GiveFoodToCow` in food.go or cow.go? Maybe there are patterns that experienced gophers know, but that adds cognitive load. And would a one-file-per-module paradigm have made Go any more complicated?


food.go and cow.go do not belong in the same (sub)module, IMO. That said, each team (and project) has unique sensibilities; consistency and familiarity help here. My gut feeling is that the "belongingness" of feeding a cow is closer to the cow than to the food, unless your food.go is full of "GiveFoodToCow(), GiveFoodToChicken(), ... GiveFoodToX()", which is gross. With that complexity, you're better off with a Feeder interface and implementing Feed() in cow.go (and chicken.go, etc.). If you cannot distill the logic into a single interface, you're probably better off with a receiver (or regular) function in cow.go, because having a list of GiveFoodToX() with different args in food.go is the worst possible design you could go with.

> And would having a 1 file per module paradigm have made go any more complicated?

I was being facetious. Under normal circumstances, no one should use one file per module in Go, but if Rubyists are feeling home-sick while writing Go, the option is available to them ;)


> In Rust, if I see a call to `foo::bar::baz()`, I instantly know that `foo/bar.rs` or `foo/bar/mod.rs`

In Rust, you would see `Bar::baz()` and have no clue where `Bar` is defined. The common style is `use something::somewhere::Bar`.


But if you see `Bar::baz` you can just go to the top of the file and find the `use something::somewhere::Bar`, and then you know what file `Bar` is defined in. In Go, the import line only tells you what module/folder it is in, and you need either some tool or some intuition about how the module is organized into files to know where to look for the definition.


And then in Rust, that file ends up just saying `pub use ...`, and the real code is in a different module.


Let's go with "Go is easy if you use an appropriate IDE"

Emacs is great for many things, but Go was clearly designed with GUI IDE tooling in mind. So if you decide to forfeit that huge benefit, of course it'll be a sub-optimal experience.


> Go was clearly designed with GUI IDE tooling in mind.

Surprisingly not. One of Go's authors commented many times in the early days that Go shouldn't need an IDE, only a text editor. He even commented that syntax highlighting/coloring were unnecessary distractions.


He said:

there was talk early in the project about whether Go needed an IDE to succeed. No one on the team had the right skill set, though, so we did not try to create one. However, we did create core libraries for parsing and printing Go code, which soon enabled high-quality plugins for all manner of editor and IDE, and that was a serendipitous success.

Which I read as they considered IDEs important.


I'll accept that as a reasonable conclusion. Then he also said the following:

https://usesthis.com/interviews/rob.pike/

https://groups.google.com/g/golang-nuts/c/hJHCAaiL0so/m/kG3B...

(and some other stuff about syntax highlighting from around that time (?) I couldn't find.)

So maybe it was a bit of both.


To be fair, we then had to throw all of those "high-quality" plugins in the garbage where they belonged, and go with Microsoft's Language Server architecture instead to really have a usable Go IDE experience in something like Emacs or vim (GoLand from JetBrains is of course much better, and that might be using some of the built-in Go tools?).


> Go was clearly designed with GUI IDE tooling in mind.

I would like to see what happens if you say this to Rob "syntax highlighting is for children" Pike's face.


What makes Emacs not a GUI IDE?


I happily use golang in vim using gopls. I don't see an IDE being a requirement at all.


This is pretty much an IDE though (which is not a bad thing!)


Gopls works great with Emacs.


The first one is an issue, not just for reading but also for writing; you'll find slightly similar ways of doing the same thing, and Murphy's (Real) Law kicks in "If there are two or more ways to do something, and one of those ways can result in a catastrophe, then someone will do it."

The rest of the issues seem to be tooling issues, not issues with Go itself. There are tools out there that do the right thing (personally I'm old school and use gotags, warts and all), so I'd suggest getting better tools that work for you (or writing one if you can't find one you like).


> Reading Go means that I keep having to read idioms (as posted in the article) and parse them back into the actual intent, rather than just reading the intent directly. But I can't even trust that, since the nth rewrite of the idiom might have screwed it up subtly.

This. Also, this idea that simple(r) = easy to read just doesn't hold. Is Brainfuck easy to read? It's very simple. Even without being so extreme, we could simplify Go by replacing loops and structured ifs with "if-gotos", or remove list enumeration as in range(mylist) and only allow integer indexing. Would that make Go easier to read or reason about?


> Reading Go means that I keep having to read idioms (as posted in the article) and parse them back into the actual intent,

What? I never have to do this...

> I can't easily navigate to the definition of a function . . . I usually don't have tooling that can jump to definition

What? You command-click on it... ?

> jump-to-definition becomes impossible as soon as interfaces are involved, because structural typing makes it impossible to know what is an intentional implementation of that interface

What? You obviously can't jump-to-definition of an interface, but in what way is this a limitation?


I think C# is an unfair comparison in this instance.

You are right, C# does too much. But Go makes it far more work to do basic things, list comprehensions for example (what is everyone's obsession with for loops?). An opinionated language with one way to do things sounds great, but not at the expense of basic niceties you get in other languages.


For what it's worth, I'll take for loops over list comprehensions any day of the week. I just find it a lot easier to read.


So you are claiming this

  arr2 := []int{}
  for i := range arr1 {
    if arr1[i] > 9 {
      arr2 = append(arr2, arr1[i] * 2)
    }
  }
Is more readable than these?

  var arr2 = arr1.Where(x => x > 9).Select(x => x*2);
  var arr2 = from x in arr1 where x > 9 select x*2;
  arr2 = [x * 2 for x in arr1 if x > 9]
  (setf arr2 (loop for x across arr1 when (> x 9) collect (* x 2)))
I honestly can't imagine by what measure any of the latter ones could be harder to understand than the former. And as the complexity of the expression increases, I typically only see the advantage increasing (though often it pays to split it into multiple comprehensions, just like a loop that does too much).


Personally? Yes (except for the first example you gave, which I don’t classify as “list comprehension”).

It’s not inherent complexity though, just personal preference. I grew up writing for loops so they’re second nature to me and I grok them instantly in a single pass, whereas some list comprehensions require a re-read (particularly if they’re nested).

Different strokes, that’s all.


Fair enough!


Agreed! Check out Python's move with walrus operators inside list comprehensions. I think you can do that now; it starts to become line noise.


It's exactly this sort of comment that gives go lovers a bad name.


"A simple language is easier to read." "A simple language builds simple software".

It's become a cliche to bring up brainfuck here, but it really is a direct refutation of these ideas, being a maximally simple language that produces maximally complicated code.


Go _is_ easy to read. I can read almost any Go codebase, even giant ones, and see what's going on pretty quickly. Same with C - the Linux kernel is surprisingly easy to understand once you know the basic data structures it uses. There is a lot of benefit in using simple/limited languages from a readability point of view.


Maybe for small pieces of code, but getting at the overall higher-level feature goals of Go code is hell (for me). The lack of function overloading to generalize intent is abysmal and outdated. It pains me to no end that the built-in functions are overloaded, but I can't overload anything in my own Go code.

Generally speaking, if you're modestly familiar with a language, it's rare that small pieces of code are difficult to read. The hard part of programming on a team is writing code such that the high-level intent is quickly apparent, with APIs that support that intent and make it easy to understand where edge-case handling is happening vs direct feature intent.


Reading the kernel is relatively easy because some people worked hard on a structure which is relatively clean.

There are many macro-heavy C code bases with tons of function-pointer fun (for example when doing OOP) where you have a hard time finding things until you've spent considerable time learning the choices made in that code base.


Yeah, true, macro-heavy code can be really difficult, as you can make your own metalanguage with macros. The Ruby codebase is a lot like that, but I still find it pretty easy to read. Someone going crazy with macros could really make it hard to follow, but that's not the norm in my experience.


The data structures in C require thorough documentation to give you any chance of keeping your footing, and the preprocessor wizards have to be kept in check. But I agree that the Linux authors have done a consistently good job.

In fact, a kernel is even easier to read than other programs whose authors worked just as hard, because having no external libraries at all also helps.


simple != easy.


Go is easy to read


> No hidden behaviours, no overly complex syntax, nothing like that.

Arrays vs. slices, and the standard functions dealing with slices, are full of weird behaviour, unnecessarily complex syntax and obscure features. Like, why are there even arrays at all? Which slice functions return a copy and which don't? Why is append() so braindamaged as to make a copy sometimes? Why do I need make() as often as I do, when the language mostly knows what make() would do?

Go still has lots of footguns, even right at the trivial basics.


"Like, why are there even arrays at all?"

Because while Go is garbage collected, it also provides modest control over memory layout. Arrays are allocated chunks of contiguous memory for holding a fixed number of things. Slices are something that sit on top of that and let you not worry too much about that.

You almost never want arrays yourself, but they are a legitimate use case that the language has to provide because you can't create them within the language itself. But the right answer for most programmers is to ignore "arrays" in Go entirely.

"Why is append() so braindamaged to make a copy sometimes?"

(Tone: Straight, not snark.) If you understand what slices are and how they are related to arrays, it becomes clear that "append" would be braindamaged if it didn't sometimes make copies. Go is simple, sure, but it was never a goal of Go to try to make it so you don't have to understand how it works to use it properly.
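
A tiny demonstration of why it must (the capacity is chosen to force both cases):

  s := make([]int, 2, 4) // len 2, cap 4
  t := append(s, 1)      // fits within cap: t shares s's backing array
  u := append(t, 8, 9)   // exceeds cap: the runtime allocates a new array and copies
  s[0] = 7               // visible through t, invisible through u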

It is a fair point that quite a few tutorials don't make it clear what that relationship is. Most of them try, but probably don't do a good enough job. I think more of them should show what a slice is on the inside [1]; it tends to make it clear in a way that a whole lot of prose can't. "Show me your code and I will remain confused; show me your data structures and I won't need to see your code; it'll be obvious."

[1]: https://golang.org/pkg/reflect/#SliceHeader


Of course Go has hidden behaviors and overly complex syntax, like any programming language before or after.

For example:

  xrefs := []*struct{field1 int}{}
  for i := range longArrayName {
    longArrayName[i].field1 = value
    xrefs = append(xrefs, &longArrayName[i])
  }
Perfectly fine code. Now after a simple refactor:

  xrefs := []*struct{field1 int}{}
  for _,x := range longArrayName {
    x.field1 = value
    xrefs = append(xrefs, &x)
  }
Much better looking! But also completely wrong now, unfortunately. `defer` is also likely to cause similar problems with refactoring, given that it is scoped to the function rather than the enclosing block.

And Go also has several ways of doing most things. Sure, C# has more, but as long as there is more than one the problems are similar.

And I'm sure in time Go will develop more ways of doing things, because the advantage of having a clean way to put your intent into code usually trumps the disadvantage of having to learn another abstraction. This is still part of Go's philosophy, even though Go's designers have valued it slightly less than others. If it weren't, they wouldn't have added Channels to the language, as they are trivial to implement using mutexes, which are strictly more powerful.


I don't think this is a good example for proving the point that Go is difficult to read. Isn't it expected that you internalize the semantics of basic language primitives when learning a new language, or do people just jump in guessing what different constructs do? IMHO you learn this in the first week when reading up on the language features.

range returns the index, and elements by value. The last example does exactly what you asked of it; it's like complaining that something is not returned by reference when it's not. Your mistake. Perhaps some linter could give warnings for it.


I think this is actually a very good example of the inverse relationship between logical complexity and language complexity.

A language that has implicit copy/move semantics is easier to write (since it's less constrained), and more difficult to read (since in order to understand the code, one needs to know the rules).

A language that has explicit copy/move semantics, is more difficult to write (since the rules will need to be adhered to), but easier to read (because the constraints are explicit).

Although I don't program in Golang, another example that pops into my mind is that slices may refer to old versions of an array. This makes working with arrays easier when writing (as in "typing"), but more difficult when reading (as in understanding/designing), because one needs to track an array's references (slices). (Correct me if I'm wrong on this.)

In this perspective, I do think that a language that is simpler to write can be more difficult to read, and this is one (two) case where this principle applies.

(note that I don't imply with that one philosophy is inherently better than the other)

EDIT: added slices case.


I don't think you can truly internalize the varying semantics of which operations return lvalues and which return rvalues. At least, I haven't yet been able to in ~2 years of Go programming, and it's a constant source of bugs when it comes up.

The lack of any kind of syntactic difference between vastly different semantic operations is at the very least a major impediment to readability. After all, lvalues vs rvalues are one of the biggest complexities of C's semantics, and they have been transplanted as-is into Go.

As a much more minor gripe, I'd also argue that the choice of making the iteration variable a copy of the array/slice value instead of a reference to it is the least useful choice. I expect it has been done because of the choice to make map access be an rvalue unlike array access which is an lvalue, which in turn would have given different semantics to slice iteration vs map iteration. Why they chose to have different semantics for map access vs array access, but to have the same semantics for map iteration vs array iteration is also a question I have no answer to.


> and they have been transplanted as-is into Go

Hum... I don't think anything can return an lvalue in C.

Your first paragraph is not a huge concern when programming in C: declarations create lvalues, and that's it. I imagine you are thinking about C++, and yes, it's a constant source of problems there... So Go made the concept much more complex than in the source.


I think Go has exactly C's semantics here. The following is valid syntax with the same semantics in both:

  array[i] = 9
  *pointer = 9
  array[i].fieldName = 9
  (*pointer).fieldName = 9
  structValue.fieldName = 9  
  pointer->fieldName = 9 // Go equivalent: pointer.fieldName = 9

  // you can also create a pointer to any of these lvalues
  // in either language with &
Go has some additional syntax in map[key], but that behaves more strangely (it's an lvalue in that you can assign to it - map[key] = value - but you can't create a pointer to it - &(map[key]) is invalid syntax).


> I don't think anything can return an lvalue in C.

Btw, I was curious to check; here is an example of a function returning an lvalue:

  #include <stdio.h> // needed for printf below

  struct test {
    int a;
  } global;

  struct test* foo() {
    return &global;
  }

  int main (int argc, char**argv) {
    printf("%d", global.a); //wil print 0

    foo()->a = 9;
    //or (*(foo())).a = 9;

    printf("%d", global.a); //will print 9
  }


What exactly is the difference here? Is the `x` variable being updated in the second sample so that the stored references all point to the same item?


Yes, that code is equivalent to

  xrefs := []*struct{field1 int}{}
  var x struct{field1 int}
  for i := range longArrayName {
    x = longArrayName[i] //overwrite x with the copy
    x.field1 = value //modify the copy in x
    xrefs = append(xrefs, &x) //&x has the same value regardless of i
  }
The desired refactoring would have been to this:

  xrefs := []*struct{field1 int}{}
  for i := range longArrayName {
    x := &longArrayName[i] //this also declares x to be a *struct{field1 int}
    x.field1 = value
    xrefs = append(xrefs, x)
  }


> Like we now have classes, structs and records.

IMO records are a great addition, especially when it comes to point-in-time/immutable data. They also provide an improvement to developer efficiency, as basic data classes can be represented with a single line record definition.

> We now have async/await but we also have channels. Some people are going to write code using channels, me others will use async.

I'm not really sure what you're talking about here. C# has no concept of a channel. If you're referring to the "System.Threading.Channels" package, it exists for those in the community who would benefit from it, and it still uses the familiar async/await syntax. It's also a very niche package that is unlikely to see significant adoption, so there's no real concern about the pattern "segmenting" the community.


> It's just simple and easy to reason about.

These aren't the same thing. Brainfuck is "simple" in the same sense - no hidden behaviors or complex syntax - but it's virtually impossible to tell at a glance what a Brainfuck program does, or say with certainty how it will react to different inputs. Complexity has to live somewhere, and the more restrictive the language, the more complexity is moved to the program structure. Conversely, every special-purpose construct in a language is a place where you don't have to rely on reasoning about an unboundedly complex Turing-complete system, and can instead go and check the documentation for what it does.


I don't entirely disagree, but I can't help noticing that you mentioned two ways to do concurrency in C# and two ways to do it in Go.


Actually Rob Pike said that Go was supposed to be "easy to understand and adopt": https://www.youtube.com/watch?v=uwajp0g-bY4


> But otherwise I am super happy with writing Go code. All my pet peeves of something like modern Java or C# are gone.

I'm always confused by this. Why do people consistently cherry-pick specific languages to compare when those languages offer fundamentally different paradigms? I understand comparing C/C++ vs Go, but isn't it obvious that C# is going to be fundamentally different from Go?


C# and Java are much closer to Go than C++. C is also closer from another perspective (simplistic syntax).

C++ is about as far from Go as you can imagine a language. Maybe only Prolog would be a worse comparison point.

C++ is designed with one goal in mind: no special compiler built-ins. If it is possible to do it at all, it can be done in a 3rd-party library. C++'s design philosophy is library first. Bjarne Stroustrup explicitly advocates this: don't write special-case code; write a library that solves the problem, then use that library to achieve your special case.

Go doesn't think writing libraries is a useful endeavor for most programmers: they should be writing application code, let their betters design the tools they need and build them into the compiler, and stop wasting time designing their own abstractions.


> C++ is about as far from Go as you can imagine a language. Maybe only Prolog would be a worse comparison point.

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...

/headscratch

EDIT: I think you're conflating the syntax of the language with its use case, e.g. low-level vs high-level programming.


I'm not talking about either syntax or use case, but about language philosophy, the approach to problem solving embodied in the language: the "paradigm", as you called it. As that post points out, even inside Google the C++ committee members had a vastly different vision than Rob Pike of what makes a language good. C++11 is essentially universally loved in the C++ community as a giant step forward, as are most subsequent revisions. Rob apparently hates them.

Even the article shows that Go and C++ are fundamentally at odds philosophically, in terms of their paradigm:

> Jokes aside, I think it's because Go and C++ are profoundly different philosophically.

> C++ is about having it all there at your fingertips. I found this quote on a C++11 FAQ:

> The range of abstractions that C++ can express elegantly, flexibly, and at zero costs compared to hand-crafted specialized code has greatly increased.

> That way of thinking just isn't the way Go operates. Zero cost isn't a goal, at least not zero CPU cost. Go's claim is that minimizing programmer effort is a more important consideration.

If you want to compare them on use cases, Go is far too slow and memory hungry to be used in most domains where C++ is actually necessary. That is, for any C++ program that could be re-written in Go, you could also re-write it in Java or C# or Haskell, bar a few problems around startup times and binary sizes.

And that is what has been seen, again as the post points out: C++ programmers are not switching to Go. Ruby and Python and (Java and C#) programmers are.

Also, the designers of C++ have never made any effort to think about how easy the language is to learn, unlike the designers of Go, Java and C#. In fact, all of these languages share a common heritage: they are all designed as answers to C++, in slightly different ways.


> That is, for any C++ program that could be re-written in Go, you could also re-write it in Java or C# or Haskell, bar a few problems around startup times and binary sizes.

Nearly any program can be written in any language. I can do functional programming in Ruby, but why on earth would I do that when Haskell is useful "out of the box" for an application where functional programming makes the most sense?

> Ruby and Python and (Java and C#) programmers are.

Programmers from dynamic, object-oriented backgrounds are switching to Go when the use case suits. My understanding is that Go's syntax and philosophies are much more aligned with people from those backgrounds (Erlang -> Elixir is a great example of this), and it gives them access to a lower-level language without having to learn many of the more "computer sciency" things that come with C/C++.


> Nearly any program can be written any language. I can write in functional programming using Ruby, but why on earth would I do that when Haskell is "out of the box" useful for an application where functional programming makes the most sense.

That's only true in theory. In practice, you can't write a program that has soft realtime constraints for example (e.g. a game engine) in Go or Java, you have to write it in C++ or C or maybe Rust. This is true in general when you need utmost performance: while in general it's easier to write the same functionality in Go or Haskell or Java than in C++, the opposite is true when you need to extract as much performance as possible: it becomes easier to work in C or C++ or Rust than in Go or Haskell or Java (of course, this is a generalization; there are exceptions, though the rule holds pretty well overall).

> My understanding is that the syntax and philosophies are much more aligned to people of those backgrounds (Erlang -> Elixir is a great example of this) and gives them access to a lower-level language without having to learn many of the common more "computer sciency" things that are involved with those languages (C++/C).

Yes, because Go is much closer to this paradigm than to the paradigm of (Modern) C++. The only programming languages whose vision is similar to C++'s, in my opinion, are Rust and Common Lisp (even though Common Lisp is of course applicable to different hardware constraints, being garbage collected and dynamic, it still has a very similar philosophy at its core).


Yes, it's obvious, but it doesn't prevent OP from liking Go more than C#/Java.

Furthermore, it's very likely that the languages people 'cherry-pick' are actually languages they use every day. So they just compare the 'new everyday language' to their 'previous everyday languages', and tell you whether they're happier or not when they write code. The point is not really the languages' theoretical properties and merits, but how people experience them in real life.


> This is what having a huge toolbox causes: some people are going to write code using channels, while others will use async.

I don't know if this is a problem. Redundancy is good. If simple languages ruled, we would all be speaking Esperanto.

> Some people will use features that others don't care about.

I end up having this argument with product managers who insist that because most people only use 60% of the functionality of a product, we don't need to implement the remaining 40%. You need more than one way of doing the same thing.


On the contrary, every instance where there is more than one way to do something is a failure of the language. It's OK, all languages have failures, but the ideal is pretty clearly a set of precisely orthogonal features, where there is neither repetition nor exception.


I am pretty sure I have read something similar to "Go is easy" a multitude of times.


We solve that with communication: the team agrees on a well-known subset of the language, plus internal training if we see potentially helpful features.


But for most programmers, easy outweighs simple in the hierarchy of values.


Which is a problem. (Though I will say I think Go is neither easy nor simple, but I believe that is an unpopular opinion here at HN.)


Simple > easy should be common sense for anyone who ever worked in a team. Code is written once and read many times (and usually not by the author).


As someone without a ton of experience with Go, a good amount of the Go code I have encountered "in the wild" has actually been more difficult to read and understand than code in more complicated languages, because I have to read through and understand all of these patterns. Hopefully the addition of generics will help with that. But IMO the simplicity of Go actually hinders readability.


My personal experience is very different. Of course I have seen bad Go code in the ~5 years I've been doing Go development professionally.

But compared to previous monstrosities in C++ or Java, with exceptions cluttered everywhere and deep inheritance trees that are absolutely useless...

...Go code is an absolute breeze to read and work with. The one thing I see frequently and dislike a lot in Go code bases is the frequent use of interfaces for single structs, just for the sake of mocking in unit tests.

Often I see cases where you could just strip out the whole layer of crap and unit test the real functions themselves. But nobody seems to think about that. This "write interfaces for everything, then mock/test this" pattern seems to dominate currently.


> But when compared to previous monstrosities in C++ or Java

Perhaps part of it is the languages we are comparing to? I'm comparing to languages such as Rust and Scala, and to some extent Python and Ruby.

> with exceptions cluttered everywhere

I prefer errors being part of the return value over exceptions, but I also find repeating

    if err != nil {
        return nil, err
    }
for almost every line of significant code in Go functions pretty distracting. I much prefer Rust's `?` operator, or Haskell's do notation (or Scala's roughly equivalent for/yield).

> deep inheritance trees that are absolutely useless.

uh, you can have deep inheritance trees in Go, and not have them in Java or C++, I'm not sure what your point is.


> Perhaps part of it is the languages we are comparing to? I'm comparing to languages such as rust and scala, and to some extent python and ruby.

I think you should compare languages that have a similar purpose / area of usage. In my experience that is C++ and mostly Java. I wouldn't dare compare dynamically typed, interpreted languages with Go... what's the point? I don't have much experience with Rust, so I cannot compare it, and additionally it's rarely used in companies. Scala I just don't like personally; for my taste it just "is too much of everything".

> ... err != nil ...

In the beginning I was thinking the same. Over time I got used to it. When I write it I use a snippet. And clearly reading the flow of the error has been beneficial to me a lot of times. Yes it is verbose.

> uh, you can have deep inheritance trees in Go, and not have them in Java or C++, I'm not sure what your point is.

I am sure you know that there is no inheritance in Go so I am not totally sure what you are getting at. My point is that I think OOP by composition is a lot clearer than by inheritance. Also composition is not overused in Go as say inheritance is overused in Java.


> I think you should compare to languages that have a similar purpose / area of usage. In my experience that is C++ and mostly Java.

The languages I listed first were rust and scala, which serve very similar purposes to c++ and java. In fact, rust is closer to c++ (no GC, more direct memory control) and scala is closer to java (runs on JVM) than go is to either.

> Over time I got used to it. When I write it I use a snippet.

Which is my point. You have to get used to things like this, which probably adds about as much cognitive load as having something in the language to reduce the noise would (although I think this is a known problem in Go and may be improved in a future version).

> I am sure you know that there is no inheritance in Go

Fine. Replace "inheritance" with struct embedding and/or interface hierarchies, and you can get a similar effect. My point is you can have overly abstracted designs in either language.


> The languages I listed first were rust and scala, which serve very similar purposes to c++ and java. In fact, rust is closer to c++ (no GC, more direct memory control) and scala is closer to java (runs on JVM) than go is to either.

The overlap in purpose and usage between Go and Java is gigantic. Also, I don't understand why it matters whether Scala is closer to Java or whether Rust is closer to C++. We were comparing Go and X, right?

This whole argument tree is a bit nonsensical... I compared Go projects to Java and C++ projects which I have worked on. All of the mentioned languages are very common in companies these days and are used for similar things. Why bring other languages into this?

> Which is my point. You have to get used to things like this, which probably adds about as much cognitive load as having something in the language to reduce this noise (although I think this is an known problem in go and may be improved in a future version).

You always have to get used to some quirks in any language out there. It adds a few lines when writing the code, but reading it is way easier IMHO, and not cognitive load. Opinions may of course vary on this.

> Fine. Replace "inheritance" with struct embedding and/or interface hierarchies, and you can get a similar effect. My point is you can have overly abstracted designs in either language.

As I already wrote, embedding is rarely used and has never been a problem in my experience, and yes, interface spamming is a problem.

However, Java has these abstraction complexities baked into the standard library already and encourages the overuse of abstractions, IMO.


> I am sure you know that there is no inheritance in Go

https://golangdocs.com/inheritance-in-golang


This tries really hard to look like an official resource, but it seems to be run by some random IT consultancy.

What they call inheritance is a narrow syntactic sugar intended to forward method calls to a member without having to write something like

  func (x Outer) Frobnicate(val int) {
    x.inner.Frobnicate(val)
  }
by hand. It's arguably not inheritance because Liskov substitution is not permitted:

  type Outer struct {
    Inner
  }

  func process(i Inner) { ... }

  var x Outer
  process(x)       // ERROR: type mismatch
  process(x.Inner) // OK


This is a bad article; struct embedding is not inheritance, and mistaking it for such is a classic case of "Java/Python programmer tries to use Go and shoehorns concepts from those languages in Go" that the post mentions.

This entire website seems pretty ... meh. It's like the W3Schools of Go.


Could you share an example?


Not the poster you're responding to, but practically every time someone wants to write code that mimics `map` or `filter` and gang, it's a 5-7 line function (at least), something that would have been a one-liner in languages like Java or C#. It gets tiring and distracting very quickly to jump around verbose implementations which amount to nothing more than standard streaming functions.

Another example is the lack of annotations for validation. In Java or C#, you'd annotate the function argument with something like `@Validated`, and it's taken care of. In golang, the validation code has to be called manually each time, which is another 3-4 lines (including error handling).

Yet another example is that golang lacks a construct similar to C#'s `Task`. You can't execute a function that returns a value asynchronously (if you care about that value) without creating a channel and passing it.
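
The usual workaround is a one-element channel; a sketch, with compute() as a stand-in:

  result := make(chan int, 1) // buffered so the goroutine never blocks on send
  go func() { result <- compute() }()
  // ... do other work concurrently ...
  v := <-result // the "await"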

golang also lacks pattern matching and discriminated unions (aka algebraic data types). Java and C# are getting both (both have pattern matching, and ADTs are in the works as far as I'm aware).


Have to agree with you. I wished for sum types & pattern matching more than for generics.. but who knows what the future brings.


this is why I have moved to Rust. I worked in Go a number of years ago, but after becoming proficient in Scala, I've decided sum types and pattern matching are where the sweet spot is. The JVM has its own issues though and I'd like to have a performance-conscious language in my toolbelt that doesn't sacrifice expressivity. Hence: Rust.


RemindMe! 10 Years "Are we there yet?"


You got me excited about Java having pattern matching, but all I could find was a proposal.



That's just for instanceof; they're working on growing the capabilities in another JEP. Switch expressions are EXTREMELY powerful, though.


I think anyone can agree with the author that Go definitely lacks conveniences you're used to in other modern languages.

I just think we also tend to dramatize how much that matters.

I also think Go's benefit really is that it's simple and that you have very few tools that let you do anything other than focusing on solving your problem, and like the author I do think goroutines are the exception to that.

Where I work, we don't even use them. We use plain ol' mutexes instead.

When coming from higher-level languages, Go does feel frustrating. In JavaScript, you're writing `const [a, b, c] = await Promise.all([x(), y(), z()])`. In Go, you're writing 40 lines of WaitGroup code. It's easy to go ughhh.
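
To be fair it's rarely a full 40, but even a minimal sketch (assuming x, y and z return ints, and omitting error propagation, which is the actually painful part) is noisy:

  var wg sync.WaitGroup
  var a, b, c int
  wg.Add(3)
  go func() { defer wg.Done(); a = x() }()
  go func() { defer wg.Done(); b = y() }()
  go func() { defer wg.Done(); c = z() }()
  wg.Wait() // the rough equivalent of the await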

But I think a nice way to appreciate Go's conservative middle ground is to go back to writing some C/C++ code for a while, like some Arduino projects. Coming from that direction, Go feels like a nice incremental improvement that doesn't try to do too much (with perhaps the exception of goroutines).

Go's performance is also a particular standout, which makes up for many of its convenience shortcomings. It's fast enough that I've written code in Go where I would have written C not long ago. And writing C involves quite a few more concessions than Go, so in comparison, Go kinda spoils you.

Go has plenty of annoyances too, though. Not having any dev vs prod build distinction is annoying. Giving maps the runtime penalty of randomized iteration order just to punish devs who don't read the docs is annoying. It's annoying to have crappy implementations of things like HTML templating in the stdlib, which steal thunder from better 3rd-party libs. Not being able to shadow vars is annoying. `val, err :=` vs `val, err =` is annoying when it swivels on whether `err` has been assigned yet or not, a footgun related to the inability to shadow vars. Etc., etc.

But it's too easy to overdramatize things.


> I also think Go's benefit really is that it's simple and that you have very few tools that let you do anything other than focusing on solving your problem

I feel just the opposite: Go has very few tools, which forces you to often have to solve language games instead of focusing on your problem.

You want to remove an element from a list? Instead of writing that, you need to iterate through the list and check if the element at position i has the properties you want, and if it does, copy the list from i+1 (if it wasn't the last!) over the original list. This is NOT the problem you were trying to solve.
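For reference, the standard order-preserving idiom (from the community's "SliceTricks" collection), wrapped in a function here for a concrete element type:

  // removeAt deletes the element at index i from s, preserving order.
  func removeAt(s []int, i int) []int {
    return append(s[:i], s[i+1:]...)
  }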

You want a map keyed by some struct? If the struct contains non-comparable fields (slices, maps, functions), you can't use it as a key directly: you have to derive a comparable key from your original struct, which probably involves string concatenation and great care to avoid accidentally making unequal structs produce equal keys; and then two maps, one from keys to the key struct and another from keys to the value struct, and of course client code also has to know about this explicitly.
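A sketch of the problem (all names hypothetical): a struct with a slice field can't be a map key, so you end up deriving a string key by hand:

  package main

  import "fmt"

  type Point struct {
    Coords []float64 // the slice field makes Point non-comparable
  }

  // key flattens a Point into a comparable string by hand; a subtle
  // mistake here can make unequal Points collide on the same key.
  func key(p Point) string {
    return fmt.Sprintf("%v", p.Coords)
  }

  func main() {
    m := map[string]Point{}
    p := Point{Coords: []float64{1, 2}}
    m[key(p)] = p
    fmt.Println(m)
  }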

You want to pass around some large structs, or iterate over slices of them? Well, you'd better start using pointers, and start being careful about what is a copy and what isn't, otherwise you'll pay a steep performance price, without any syntactic indication.

You want to work with sets of values? You'll have to store them as map keys, perhaps doing all of the black voodoo described above. And of course, you can't take the union of two sets directly; you have to iterate through the keys of the first map and the second map and add them to a third map.
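A sketch of that union, with sets represented as `map[string]struct{}`:

  // union merges two string sets, built element by element.
  func union(setA, setB map[string]struct{}) map[string]struct{} {
    out := make(map[string]struct{}, len(setA)+len(setB))
    for k := range setA {
      out[k] = struct{}{}
    }
    for k := range setB {
      out[k] = struct{}{}
    }
    return out
  }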

All of these things are annoying when you are writing them, and even more of a problem when you are reading the code, as you have to understand the intent from the messy internals that Go forces you to expose.


I frequently describe Go as "C, with memory safety and GC". I don't run into many who disagree.

Go is basically how Java was pre-generics. People forget that early Java was much like Go in some respects. Everything cast to Object. Not really sure what the object is unless you wrote the code. Lack of common patterns and a rich set of libraries (Guava, Apache Commons).

Early Java was supposed to be safe C++. Converting C++ to Java is still trivial unless there's pointer craziness. Generally copy paste the C++ and change naming conventions.

I hate working on unfamiliar Go projects. Copy pasted code everywhere. And everyone builds their own abstractions because there aren't enough.

I don't understand Go's appeal. To me it feels clunky and stuck in the 90's. Designed by C programmers for C programmers.

Like C it has no generics, bad packaging, and is a pain in the ass to cross-compile. Its only killer feature, IMO, is its fiber threading model, and that's quickly being copied by other languages, even ancient Java.


> Converting C++ to Java is still trivial unless there's pointer craziness. Generally copy paste the C++ and change naming conventions.

What? You've obviously never seen actual C++ code to make such a ridiculous statement.


1996 C++ (when Java appeared), as used in MFC, OWL, PowerPlant and CSet++, alongside Smalltalk in the Gang of Four book: definitely.


Google C++ looks a lot like Java, maybe that's where they are coming from. But it's still quite a stretch!


As they say, "you can write FORTRAN in any language".


In fact, Go is not even memory safe despite the GC. It has data races, which are exploitable.

  - https://blog.stalkr.net/2015/04/golang-data-races-to-break-memory-safety.html
  - https://golang.org/doc/articles/race_detector#Introduction


> I hate working on unfamiliar Go projects. Copy pasted code everywhere. And everyone builds their own abstractions because there aren't enough.

Arguably, if the project requires lots of abstractions and copy-pasted code, then the project shouldn't be written in Go. Just because you can, technically speaking, write object-oriented code in Go does not mean that it is ergonomic to do so or a recommended use-case for the language. Projects that benefit from object-oriented design should stick to languages with first-class support for it.


I don't disagree with your conclusion but I don't see it following from the quoted premise. What was your thought chain to get from the quoted text to OO stuff?


There's a difference between using a struct to represent concepts like program configuration or grouped function parameters, and shoehorning OO-native design concepts like the decorator pattern that need inheritance into a language that doesn't support dynamic dispatch. Much of the bad Go code I see comes, yes, from overusing abstractions and lots of copy-pasting, but most of that comes from trying to fit square pegs like OO design into Go's round hole.


Still don't get the connection in the way you do. Maybe I'm so used to OO stuff that I just gloss over it.

> a language that doesn't support dynamic dispatch.

Go's interfaces do support this enough that I consider it worth at least mentioning.

https://en.wikipedia.org/wiki/Dynamic_dispatch#Go_and_Rust_i...


> Like C it has no generics, bad packaging, and is a pain in the ass to cross compile.

Go modules are pretty good packaging IMHO. What don't you like about them?

As for cross-compiling, are you joking? It's very easy and I literally cannot imagine it being easier. What's painful about it?


I would say Go has very poor packaging, but that's comparing it with Java (Maven), Rust, etc. It's definitely much better than C.

The biggest reason why Go modules are a very poor packaging solution is the horrendous stupidity of tying them to source control. This makes it difficult to develop multiple modules in the same repo; it requires your source control system to know about Go; and it makes internal refactors a problem for all consumers of your code (e.g. if you move from GitHub to GitLab, all code consuming your package has to change its imports).

The versioning ideas of their module system, particularly around the v2+, have made it such a pain that there still isn't a single popular package that uses the proposed solution so far, not even Google ones like the Go protobuf bindings.


> The biggest reason why Go modules are a very poor packaging solution is the horrendous stupidity of tying them to source control

I think it's actually a great idea. It uses native things you use anyway, avoids an unnecessary third party (a package repository like PyPI), and adds extra security (remember those hijacked PyPI credentials and the PyPI-only malicious packages that weren't in source control, and so were hard to verify?).

> This makes it difficult to develop multiple modules in the same repo

How so? You can just use different folders and packages.

> requires your source control system to know Go

Not really.

> it makes internal refactors a problem for all consumers of your code

That's always the case - regardless of your package repository, if you change it, you have to update the references everywhere. You can use go.mod to replace a reference (e.g. use gitlab.com/me/library instead of github.com/legacy/library) without updating all the code, as sketched below. And then there's the goproxy protocol, which makes all of this optional.
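For reference, that escape hatch is a single go.mod directive (module paths reused from the hypothetical example above):

  // in go.mod
  replace github.com/legacy/library => gitlab.com/me/library v1.2.3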

I agree on v2 modules, that is very weird.


> I think it's actually a great idea. It uses native things you use anyway, avoids an unnecessary third party ( a package repository like pypi), and adds in extra security ( remember those hijacked pypi credentials and pypi-only malicious packages that weren't in source control, so hard to verify?).

Not really. You may be using Git internally, but if you want to consume a module that is developed in Perforce or Subversion, you must now also install P4 and/or Subversion and configure them so that they have access to the remote repo.

It doesn't avoid a third party, it actually makes you reliant on numerous third parties, the source hosting sites for every module you use (Github is a 3rd party between me and the developers of the Go language bindings for protobuf, for example).

Also, that is false extra security, as nothing prevents the code itself from being malicious or embedding malicious files, and this can be arranged to be served only when requested through go mod (based on headers, on the HTTPS requests that only go mod makes, etc.). Especially given that Go supports shell execution in source files (//go:generate rm -rf ~/).

> How so? You can justs use different folders and packages.

If you have multiple modules in the same repo, you have to tag every "release" with multiple tags, one for each module (e.g. mod1/v1.0.123, mod2/v1.0.131) and you'll quickly run into problems (if you don't sync releases, you'll have absurd combinations of module versions whenever you sync your repo, so you won't be able to rely on `replace` clauses for example).

> > requires your source control system to know Go

> Not really.

It does: if you have a dependency like "github.com/user/proj", the go tool will first make an HTTPS request to that URL and expect a specific response telling it how to get the code. (Concretely, it fetches the import path with `?go-get=1` appended and looks for a `<meta name="go-import" content="import-prefix vcs repo-url">` tag in the response; see `go help importpath`.)

> That's always the case - regardless of your package repository, if you change it, you have to update the new references everywhere.

If I publish my modules to Maven, I can switch from Perforce to Git without any change whatsoever for anyone consuming the Maven package. I can split or merge my repos internally in any way, but as long as I produce the same Maven package, my consumers don't need to care.

Replace directives and goproxy are bandaids for a silly problem to have.


> It's very easy and i literally cannot imagine it being easier.

`go build -target x.y` could be considered fractionally easier than `GOOS=x GOARCH=y go build`? But yeah, this is a nonsense claim by GP.


I don't think this article really overdramatizes things; if I had written "zomg Go is terrible look what I need to write to delete an item from a list!" – a sentiment I've seen expressed a few times over the years – then, yeah, sure.

But that's not really what the article says. As it mentions, I don't think these are insurmountable issues, and I still like and use Go: it's pretty much my go-to language these days. Overall, I think it did a lot of things right. But that doesn't mean it's perfect or can't be improved.


The curse of having landed on the HN front page. You write a little blog post with some random thoughts, somebody found it insightful and posts it to HN and if more people like it, it ends up on the front page. That exposure tends to attach a wide-ranging seriousness to this kind of blog post which may dramatically overinflate the point that it originally intended to make.


I used "dramatize" to describe the blog post (in such that the blog post was written) and then "overdramatize" at the end in reference to my own infinite list of annoyances with Go and the normal course of discussion on HN about proglangs.

The blog post does admit "Are these insurmountable problems? No." Learning it in 5 minutes certainly is a stretch as you point out, but I'd still say you can learn it in a weekend to a greater degree than any other language I use.

Though you'll notice my comment after the first sentences is a way to share my own thoughts on Go, an opportunity I'm never going to neglect.


I agree with that. Go can be frustrating if you expect the level of expressiveness of Python, Java, or Rust. But if you regard it as a better C, you can appreciate its ergonomic upgrades over C. What is unfortunate is that Go could have been much more than just a better C, considering Google's resources and the decades of programming language research available at its inception. By trying too hard to simplify things, and by ignoring precedents, Go missed a huge opportunity. And it even failed to fulfill its original intent: it cannot be used in place of C, unlike Rust.


Agreed. I would love for Go to have stolen more pages from Rust's crusade for zero-cost abstractions and high level conveniences. I can appreciate ruthless conservatism in some places, but Go isn't actually ruthlessly conservative. It has quite a lot of weird things in its stdlib.


Could you please expand on why Go is not a replacement for Rust? Is that because of the lack of manual memory management?


What I wanted to say was that Go was not a replacement for C, whereas Rust can have similar performance characteristics to C or C++. The lack of manual memory management is one thing, but there are cultural reasons too. For example, in Go it is very idiomatic to return an error value built from a string (sketched below), which could hardly be imagined in the C, C++, or Rust ecosystems. That's because Go is targeted toward backend-level programming nowadays (which was different at the beginning, when Go was marketed as a systems language), where some minor overhead is acceptable if the convenience it brings is worth it. Go is full of such compromises, which isn't exactly a bad thing, but it is unsuitable for the lowest-level programming, which begs for an extremist position on performance. The mentality of Go is good enough for conventional programming, but bad for systems programming. And I think such cultural differences stem from Go's simplicity-first mindset.
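A sketch of that idiom (the `validate` function is a hypothetical stand-in); the returned error wraps a formatted string in a heap-allocated interface value, rather than the integer code or enum a C or Rust programmer would reach for:

  package main

  import "fmt"

  // validate is hypothetical; fmt.Errorf builds the error from a string.
  func validate(n int) error {
    if n < 0 {
      return fmt.Errorf("invalid count: %d", n)
    }
    return nil
  }

  func main() {
    fmt.Println(validate(-1)) // prints: invalid count: -1
  }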


Yes and more:

1. Lack of "manual memory management" can be taken to mean GC (though Rust is neither GC'd nor manually memory-managed; it's declaratively memory-managed), and a GC is unacceptably expensive for systems programming. Furthermore, until recently GC was also unacceptably bad for FFI. Nowadays stable Rust, and Haskell with experimental features, can GC across FFI without issue, but I'm still unaware of any other language that can.

2. Lack of "manual memory management" also means lacking direct memory access, etc. This is a problem in systems programming, though you do not need it for all systems programming.

3. Then there's green threads. Early on, Rust had them, but they were removed when it was discovered that it's theoretically impossible to implement them with acceptably low overhead for a serious C or even C++ competitor. What's worse, you have to pay the overhead even if you don't use the feature. Nowadays you can opt in by importing an ecosystem library that provides green threads, but mostly people just use futures instead.

4. Interfaces: even TinyGo's documentation advises avoiding interfaces, and TinyGo isn't even a C/C++/Rust competitor; it's a MicroPython competitor.

5. ...


The obvious case is embedded programming. You can run Go via a custom compiler for microcontrollers: https://tinygo.org/.

But to their point: why incur that sort of overhead for a language that doesn't necessarily give you enough conveniences to be worth it? Or rather, Rust gives you more conveniences with very little runtime overhead.

It's not exactly damning to be unable to compete with C on a microcontroller with 2K RAM, they were just pointing out the end of my comparison with C/C++.


Go's lack of generics and metaprogramming can be extremely frustrating when you're working with services that copy data around from one struct to another, which ends up being a lot of them. You're left with reflection, code generation, or doing it field by field.

It'd be nice to have syntactic sugar for splatting structs together, or a library that let you do it in a typesafe and efficient way.
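A sketch of the field-by-field flavor, with hypothetical DTO and domain types; reflection-based copiers trade this boilerplate for runtime cost and weaker type safety:

  type UserDTO struct {
    Name  string
    Email string
  }

  type User struct {
    Name  string
    Email string
  }

  // toUser must name every field by hand, and silently falls out of
  // date when fields are added to either struct.
  func toUser(d UserDTO) User {
    return User{
      Name:  d.Name,
      Email: d.Email,
    }
  }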

EDIT: I will defend the random map order, though. Golang maps were only predictable up until a set number of items, which caused hard-to-detect bugs where your small test case would be in order but your large real-life data would be out of order.


I certainly agree. I would love for Go to get to a place where I never have to copy and paste code again. Generics will get us part way there.

The next item on my wishlist are algebraic data types. In complex projects I've opted for Rust over Go just because of the abstraction power of being able to represent things like:

    Dest = Register (A | B | C)
    Src = Register (A | B | C) | Immediate value
    OpCode = Nop | Add(Dest, Src) | ...
In Go, I'm so tired of `interface{}`.

Edit: To respond to your edit about map iter order, that's related to my complaint of Go's lack of dev vs release build. Randomizing map iter order in dev builds is acceptable to me. Go makes you pay for that in prod.


I don't understand the jump from the lack of algebraic types to using interface{}.

You can spell out all your types as explicit structs (choosing between a simple implementation with slightly worse performance, or a slightly more complex one) and pass them around.

There are many complex code bases (kernels & databases come to mind) which use C (not C++), and they don't resort to passing void* around the "business logic" parts.

The idea that for a complex project you would choose a different language, with so many different characteristics, over a minor detail of the type system seems absurd to me (this is not exactly Haskell vs JS...). This kind of decision making would not pass in any reasonable organization...


> (this is not exactly Haskell vs JS...

No, it's more like Haskell vs C, considering the expressiveness of Rust’s vs Go’s type system.


I don't know Go. How can you emulate sum types in it? And how do you pattern match on them?


It depends on what you mean exactly by "emulating sum types" and "pattern matching them".

An example is rolling your own discriminated union:

  package main

  import "fmt"

  type Avariant struct {
    a int
  }
  type Bvariant struct {
    b string
  }

  // The tag that says which variant is live; the compiler
  // cannot check it for us.
  type SumChoice int

  const (
    SumA SumChoice = 0
    SumB SumChoice = 1
  )

  type Sum struct {
    choice SumChoice
    A      *Avariant
    B      *Bvariant
  }

  func foo(x Sum) {
    switch x.choice {
    case SumA:
      fmt.Printf("Value is %d", x.A.a)
    case SumB:
      fmt.Printf("Value is \"%s\"", x.B.b)
    }
  }

  func main() {
    sumA := Sum{choice: SumA, A: &Avariant{a: 7}}
    sumB := Sum{choice: SumB, B: &Bvariant{b: "abc"}}
    foo(sumA)
    foo(sumB)
  }
As usual with Go, it's ugly and verbose and error prone, but it just about works.

The previous poster was probably thinking of something similar but using interface{} (equivalent to Object or void*) instead of Sum, and then pattern matching using an explicit type switch:

  func foo(x interface{}) {
    // The type switch dispatches on the dynamic type held in x.
    switch actual := x.(type) {
    case Avariant:
      fmt.Printf("Value is %d", actual.a)
    case Bvariant:
      fmt.Printf("Value is \"%s\"", actual.b)
    }
  }
This is slightly less risky and has less ceremony, but it's also less clear which types foo() supports based on its signature. Since Go has only structural subtyping [0] for interfaces, you would have to add more boilerplate to use a marker interface instead.

[0] In Go, a type implements an interface if it has methods that match (name and signature) the interface's methods. The name of the interface is actually irrelevant: any two interfaces with the same methods are equivalent. So if you declare an interface MyInterface with no methods, it is perfectly equivalent to the built-in `interface{}`: any type implements this interface, and any function which takes a MyInterface value can be called with any type in the program.


Thanks for the write-up effort.

On the first approach, by "error prone" you mean the tag could be incorrect, right? (Or even an impossible value, with zero or multiple variants set.)


Yup. Also, there is no exhaustiveness checking (the constants we chose to define for SumChoice(0) and SumChoice(1) mean nothing to the compiler anyway; exhaustiveness checking would have to consider every possible int, since SumChoice is just a named integer type).


But are those two variants held in memory inside Sum or are they heap-allocated sitting out on some other cache line? Can one write a performant Sum type?


With pointers, they are in the heap, but at least there is no overhead. You could also declare them embedded (without the *s), but then the Sum struct would have a size that is a sum of all the variants' sizes, instead of a proper union which should have the max size (more or less).

  type Sum struct {
    choice SumChoice
    A      Avariant
    B      Bvariant
  }
This is what I meant by saying that it depends on exactly what you mean by "sum types".


Got it. Although I'm not sure what "no overhead" means if the instances have to live far away on the heap. That means you've got an alloc (including a mutex lock), then the value lives far away, then the garbage-collection overhead, then the delete. When I think sum type, I think a flag plus storage for the maximum size and alignment, and all of the above bookkeeping and cache misses go away.


Yes, you're right - I was thinking of "no space overhead", and even that is not entirely correct (you pay an extra pointer per variant, which in this case would be some overhead over the embedded version).

Still, I think most people don't worry so much about efficient packing of sum types, and instead the safety and convenience factors are more important. Of course, YMMV based on exact usage.

I'm not in any way claiming that Go supports sum types. Someone just asked how they may be emulated, and I don't think it should be surprising that emulation has an overhead.


> But I think a nice way to appreciate Go's conservative middleground is to go back to writing some C/C++ code for a while, like some Arduino projects. Coming from that direction, Go feels like a nice incremental improvement that doesn't try to do too much (with perhaps the exception of goroutines).

As someone who was writing Arduino-like code in C++ in 1993 (MS-DOS), I fail to see that.


If in 2021 you think of “C/C++” as a single language, you are missing out on the last decade of C++.


C/C++ isn't a programming language, rather the abbreviation for C and C++.

I am fully aware of the last decade of C++, including papers written by Bjarne Stroustrup and other C++ key figures, where they use the hated C/C++ expression that so many happen to have issues with.

Which I can gladly point you to.

Given that security is one of my areas, I always keep up to date with C and C++ standards, even if I don't use them every day.


I know the history, but I still think your comment was lumping them together unfairly in this context. With few exceptions, if you are writing `new` or `delete` you are doing C++ wrong in 2021 and that I would call C/C++ :-)


It wasn't my comment, rather OP's quote.

As for not using new and delete, I kind of agree, then again the only C++ GUI framework worth using has them all over the place.

I'd rather see all those school kids learning Arduino use new and delete than error-prone malloc() size calculations, or dive into template errors due to the misuse of smart pointers.


Hopefully concepts will save us from cryptic error messages.


> C/C++ isn't a programming language, rather the abreviation for C and C++.

Maybe the abbreviation should be `C & C++` then :-). But I'm not sure if infix precedences allow it to work.


> Giving maps a runtime penalty of random iteration in order just to punish devs who don't read docs is annoying

Can you explain what you mean by "runtime penalty"? Last I checked, the iteration is only random wrt its starting offset; after that, it proceeds linearly through the buckets. So you only generate one random integer per `range`, which seems acceptably cheap.
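For anyone unfamiliar, the randomization under discussion is easy to observe:

  package main

  import "fmt"

  func main() {
    m := map[string]int{"a": 1, "b": 2, "c": 3}
    for k, v := range m {
      fmt.Println(k, v) // order varies from run to run and loop to loop
    }
  }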


It still seems to be working as you described.

https://github.com/golang/go/blob/db8142fb8631df3ee56983cbc1...


But why would you be comparing to C? I've yet to find any scenario where Go is an option and e.g. OCaml isn't (unless the scenario is "I want to use a language whose creator called it a "systems language"").


Great summary of areas where Go could be improved for real world use. I’d want any of these improvements over delete at index in a slice (which I’ve never wanted).

