Hi Friends,

Thanks for all your hard work on 1.7 so far.
A couple of days ago, Damian Gryski tweeted a request for folks to canary 1.7 sooner rather than later, so I thought I would share some graphs from one of our highest request-rate production services. These images are from https://github.com/golang/go/commit/eeca3ba92fdb07e44abf3e2bebfcede03e1eae12, which was current master at the time I built.
In each image, the blue line represents the canary host:
First up is the per-request p95 performance for the most-called endpoint in this service. This route is intended to be fast. The timing (y axis) is in ms, so what you see here represents about an 8% improvement, down to ~40µs response times:

[image: p95 response time for the most-called endpoint]

I'm seeing the above performance difference on other endpoints as well. The 3 "buttes" between 16:00 and 22:00 are not related to the Go version change; the improved performance is.

Next up is GC pause time, which seems to be further improved from 3-4ms to 1-2ms for this service. For a little extra context, this is the same service referenced in previously shared GC performance improvements:

https://twitter.com/brianhatfield/status/634166123605331968
https://twitter.com/brianhatfield/status/692778741567721473

[image: GC pause time]

Finally, the most changed metric is one I don't really understand, which is mspan_inuse:

[image: stack MSpan inuse]

There didn't seem to be a notable change in the overall system memory profile of this service, so I am not sure what this change truly represents.
The overall system load does not appear to have meaningfully changed as a result of the observed performance improvements, which is a little surprising - I expected an observable reduction in line with the GC and request performance improvements (about 5-10%?).
As far as compile times, they were already and continue to be excellent for our use cases, so I will leave benchmarking that up to Dave Cheney :-)

I'll track for bugs or other issues and report them on Github as observed. I'm happy to provide more information as requested.

Thanks again for all your work,
Brian Hatfield
--
You received this message because you are subscribed to the Google Groups "golang-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-dev+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Thanks for the performance updates! We really appreciate getting some visibility into the impact of our changes on other peoples' systems. End-to-end latency is particularly informative since it's not just the runtime measuring itself (which, while informative, can very easily be misleading).

A few questions and thoughts inline below.

On Fri, May 6, 2016 at 5:47 AM, Brian Hatfield <bmhat...@gmail.com> wrote:

> Hi Friends,
>
> Thanks for all your hard work on 1.7 so far.
> A couple of days ago, Damian Gryski tweeted a request for folks to canary 1.7 sooner rather than later, so I thought I would share some graphs from one of our highest request-rate production services. These images are from https://github.com/golang/go/commit/eeca3ba92fdb07e44abf3e2bebfcede03e1eae12, which was current master at the time I built.
> In each image, the blue line represents the canary host:

Which release are the non-canaries running? I ask mainly because there was a bug fix in 1.6.2 (2644b76) that could significantly improve end-to-end latency in some situations (pre-1.6.2, if the stars aligned, the sweeper became effectively STW).
> [...]
>
> Finally, the most changed metric is one I don't really understand, which is mspan_inuse:

In the text you said mspan_inuse, but the plot is labeled "stack MSpan inuse". Is this MemStats.StackInuse or MemStats.MSpanInuse? If it is StackInuse, it may just be that SSA produces very different stack layouts than the 1.6 compiler and may have produced a larger stack frame for some key functions. Since stacks are always power-of-two sized, if a common code path used by lots of goroutines is close to the stack boundary and its stack size grows just enough to push it over the power-of-two boundary, it can have an outsized effect on overall stack allocation.

If the overall memory footprint didn't notably change, this is probably nothing to worry about.
> The overall system load does not appear to have meaningfully changed as a result of the observed performance improvements, which is a little surprising - I expected an observable reduction in line with the GC and request performance improvements (about 5-10%?).

Can you say more about how you're measuring system load?