Interested in Linux, FOSS, data storage systems, unfucking our society and a bit of gaming.

I help maintain Nixpkgs.

https://github.com/Atemu
https://reddit.com/u/Atemu12 (Probably won’t be active much anymore.)

  • 123 Posts
  • 1.54K Comments
Joined 4 years ago
Cake day: June 25th, 2020


  • Atemu@lemmy.ml (OP) to nixos@lemmy.ml · NixOS 24.05 released · edited · 20 days ago

    As always, stable releases are about how frequently breaking changes are introduced. If breaking changes potentially happening on any given day is fine for you, you can use unstable. For many use cases, however, you want some agency over when exactly breaking changes land; point releases à la NixOS give you a one-month window to migrate for each release.
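
    A rough sketch of what pinning to the point release looks like with flakes (a channel-based setup would use `nix-channel --add` with the corresponding channel instead); the flake structure is generic boilerplate, only the branch name is specific to this release:

    ```nix
    {
      # Track the 24.05 release branch instead of nixpkgs-unstable;
      # breaking changes then only land when this input is bumped
      # to the next release branch.
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

      outputs = { self, nixpkgs }: {
        # ... nixosConfigurations referencing nixpkgs go here ...
      };
    }
    ```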

  • Atemu@lemmy.ml to Privacy@lemmy.ml · Legitimate interest? · 2 months ago

    Your browser cannot block server-side abuse of your personal data. These consent forms are not about cookies; they’re about fooling users into consenting to abuse of their personal data. Cookies are just one of many, many technological measures required to carry out said human rights abuse.

  • The thing with bonds is that interest rates were low to negative for a long time, so only the newest, shortest-dated ones yield even roughly the central bank rate, i.e. EUR ultrashort bonds or EUR floating rate bonds (those are from banks, at least).

    I don’t really want to make returns with this anyway, I just want to hedge against inflation. For returns I have equity funds.

    floating rate bonds

    Doesn’t sound bad either, I’ll have to take a look at it. It does have a large tracking difference to inflation though, because the benchmark rates are only ever adjusted in reaction to changes in inflation.

    I don’t suppose DE issues anything like that? My search engine only finds the inflation-linked bonds.

    Are there index funds for that?

  • In principle I have nothing against bond ETFs, but what you linked there is an ETF on a global bond index. The bonds are A. not inflation-indexed and B. from issuers that aren’t really trustworthy, like China, the UK or Italy. I don’t want that.

    the biggest risk is the central banks’ interest rate and the state of world politics.

    That’s clear; for me this would be an alternative to instant-access savings (Tagesgeld), for which that applies directly or indirectly anyway.

    What I want here is an asset that is tied very closely to inflation (ideally directly) and moves with it, but doesn’t go through the insanely large swings of equity. Just this past week, for example, global stock indices dropped by 3-4%. We all know that this will very likely have recovered in a few months, which is why we hold ETFs on these indices for long-term growth, but if I urgently needed liquidity right now, I’d have to eat those few percent, and I’d rather have this other asset that I could sell instead.
    I don’t think inflation-indexed bonds alone can be that asset, but they could at least form part of it.

  • Atemu@lemmy.ml to Linux@lemmy.ml · Thoughts on CachyOS? · 2 months ago

    v3 is worth it though

    [citation needed]

    Sometimes the improvements are not apparent by normal benchmarks, but would have an overall impact - for instance, if you use filesystem compression, with the optimisations it means you now have lower I/O latency, and so on.

    Those would show up in any benchmark that is sensitive to I/O latency.

    Also, again, [citation needed] that march optimisations measurably lower I/O latency for compressed I/O. For that to happen it is a necessary condition that compression is a significant component in I/O latency to begin with. If 99% of the time was spent waiting for the device to write the data, optimising the 1% of time spent on compression by even as much as 20% would not gain you anything of significance. This is obviously an exaggerated example but, given how absolutely dog slow most I/O devices are compared to how fast CPUs are these days, not entirely unrealistic.
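
    To make that concrete with the (deliberately exaggerated) numbers above: by Amdahl’s law, if compression makes up a fraction f of total I/O latency and the optimised build speeds compression up by a factor s, the end-to-end improvement is

    $$\text{speedup} = \frac{1}{(1-f) + f/s}, \qquad f = 0.01,\ s = 1.2 \;\Rightarrow\; \frac{1}{0.99 + 0.01/1.2} \approx 1.002$$

    i.e. about 0.2% lower I/O latency, which is well below the noise floor of typical benchmarks.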

    Generally, the effect of such esoteric “optimisations” is so small that the length of your unix username has a greater effect on real-world performance. I wish I was kidding.
    You have to account for a lot of variables and measurement biases if you want to make factual claims about them. You can observe performance differences on the order of 5-10% just due to slight memory layout changes caused by different compile flags, without any actual performance improvement due to the change in code generation.

    That’s not my opinion; it’s a rather well-established fact. Read here:

    So far, I have yet to see data that shows a significant performance increase from march optimisations which either controlled for the measurement bias or showed an effect that couldn’t be explained by measurement bias alone.

    There might be an improvement and my personal hypothesis is that there is at least a small one but, so far, we don’t actually know.

    More importantly, if you’re a laptop user, this could mean better battery life since using more efficient instructions, so certain stuff that might’ve taken 4 CPU cycles could be done in 2 etc.

    The more realistic case is that an execution that would have taken 4 CPU cycles on average would then take 3.9 CPU cycles.

    I don’t have data on how power scales with varying cycles/task at a constant task/time but I doubt it’s linear, especially with all the complexities surrounding speculative execution.
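
    As a purely illustrative upper bound: even under the (almost certainly wrong) assumption that energy per cycle is constant, going from 4.0 to 3.9 cycles per task saves at most

    $$\frac{\Delta E}{E} \le \frac{4.0 - 3.9}{4.0} = 2.5\%$$

    of the CPU’s energy for that workload, and fixed platform power (display, RAM, idle draw) dilutes the effect on actual battery life further.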

    In my own experience on both my Zen 2 and Zen 4 machines, v3/v4 packages made a visible difference.

    “visible” in what way? March optimisations are hardly visible in controlled synthetic tests…

    It really doesn’t make sense that you’re spending so much money buying a fancy CPU, but not making use of half of its features…

    These features cater towards specialised workloads, not general purpose computing.

    Applications which facilitate such specialised workloads and are performance-critical usually have hand-made assembly for the critical paths where these specialised instructions can make a difference. Generic compiler optimisations will do precisely nothing to improve performance in any way in that case.

    I’d worry more about your applications not making any use of all the cores you’ve paid good money for. Spoiler alert: Compiler optimisations don’t help with that problem one bit.


  • I’d define “bloat” as functionality (as in: program code) present on my system that I cannot imagine ever needing to use.

    There will never be a system that is perfectly tailored to my needs because there will always be some piece of functional code that I have no intention of using. Therefore, any system is “bloated”; it’s only a question of the degree to which it is “bloated”.

    The degree depends on which kind of resource the “bloat” uses and how much of it. The more significant the resource usage, the more significant the effect of the “bloat”. The kind of resource used determines how critical a given amount of usage is; 5% of power, CPU, I/O, RAM or disk usage each have a different degree of criticality, for instance.

    Some examples:

    This system has a calendar app installed by default. I don’t use it, so it’s certainly bloat but I also don’t care because it’s just a few megs on disk at worst and that doesn’t hurt me in any way.

    Firefox frequently uses most of my RAM and >1% CPU util at “idle” but it’s a useful application that I use all the time, so it’s not bloat.

    The most critical resource usage of systemd (pid1) on my system is RAM, which is <0.1%. It provides tonnes of essential features required on a modern system and is therefore not even worth thinking about when it comes to bloat.

    I just noticed that mbrola voices sneaked into my closure again, which is like 700MiB of voice synthesis data for many languages that I have no need for. Quite a lot of storage for something I don’t ever need. This is significant bloat. It appears Firefox is drawing it in, but it looks like that can be disabled via an override, so I’ll do that right now.
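
    For reference, this is roughly the kind of override I mean; treat it as a sketch, since the exact flag name is from memory and may differ between Nixpkgs versions:

    ```nix
    # Overlay sketch: rewrap Firefox without speech synthesis support so
    # that speech-dispatcher/mbrola voices no longer end up in the closure.
    # The flag name (speechSynthesisSupport) is an assumption from memory.
    final: prev: {
      firefox = prev.firefox.override {
        speechSynthesisSupport = false;
      };
    }
    ```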