Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan.
https://d.sb/
Mastodon: @[email protected]

  • 4 Posts
  • 1.49K Comments
Joined 2 years ago
Cake day: June 14th, 2023





  • dan@upvote.au to linuxmemes@lemmy.world · my expierence · 8 points · 16 hours ago (edited)

    Not just DuckDuckGo - the majority of search engines and voice assistants that aren’t Google use data from Bing. It’s the largest search engine that has a public API. Even search engines that have their own index usually use Bing to supplement their results.






  • dan@upvote.au to Technology@lemmy.world · Cloudflare outage post mortem · 2 points · 3 days ago (edited)

    I’m a fan of BunnyCDN - somehow they’re one of the fastest while also being one of the cheapest, and they’re based in Europe (Slovenia).

    KeyCDN is good too, and they’re also Europe-based (Switzerland), but they have a higher minimum monthly spend of $4 instead of $1 at Bunny.

    Fastly have a free tier with 100GB per month, but bandwidth pricing is noticeably higher than Bunny and KeyCDN once you exceed that.

    https://www.cdnperf.com/ is useful for comparing performance. They don’t list every CDN though.

    Some CDN providers are focused only on large enterprise customers, and it shows in their pricing.



  • dan@upvote.au to Technology@lemmy.world · Cloudflare outage post mortem · 10 points · 3 days ago (edited)

    > there really isn’t much in the way of an alternative

    Bunny.net covers some of the use cases, like DNS and CDN. I think they just rolled out a WAF too.

    There’s also the “traditional” providers like AWS, Akamai, etc. and CDN providers like KeyCDN and CDN77.

    I guess one of the appeals of Cloudflare is that it’s one provider for everything, rather than having to use a few different providers?


  • dan@upvote.au to Technology@lemmy.world · Cloudflare outage post mortem · 10 points · 3 days ago (edited)

    This can happen regardless of language.

    The actual issue is that they should be canarying changes. Push them to a small percentage of servers, and ensure nothing bad happens before pushing them more broadly. At my workplace, config changes are automatically tested on one server, then an entire rack, then an entire cluster, before fully rolling out. The rollout process watches the core logs for things like elevated HTTP 5xx errors.
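The staged rollout described above can be sketched roughly like this. Every name here (`STAGES`, `roll_out`, the 1% error budget) is hypothetical, not any particular deploy tool's API:

```python
# Hypothetical sketch of a staged ("canary") config rollout.
# Each stage widens the blast radius only if error rates stay healthy.

STAGES = ["one server", "one rack", "one cluster", "everywhere"]
ERROR_BUDGET = 0.01  # abort if more than 1% of responses are HTTP 5xx

def healthy(responses):
    """Check that the share of 5xx status codes stays within budget."""
    errors = sum(1 for status in responses if 500 <= status <= 599)
    return errors / len(responses) <= ERROR_BUDGET

def roll_out(config, deploy, sample_logs):
    """Push `config` stage by stage; roll back on elevated 5xx rates."""
    for stage in STAGES:
        deploy(config, stage)
        if not healthy(sample_logs(stage)):
            deploy("previous-known-good", stage)  # roll back this stage
            return f"aborted at {stage}"
    return "fully rolled out"
```

A bad config that instantly produces 5xx errors gets caught at the first stage and only ever touches one server, instead of the whole fleet at once.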


  • dan@upvote.au to Technology@lemmy.world · Cloudflare outage post mortem · 26 points · 3 days ago (edited)

    Did you read the article? It wasn’t taken down by the number of bots, but by the number of columns:

    > In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. Currently that limit is set to 200, well above our current use of ~60 features. Again, the limit exists because for performance reasons we preallocate memory for the features.

    > When the bad file with more than 200 features was propagated to our servers, this limit was hit — resulting in the system panicking.

    They had some code to get a list of the database columns in the schema, but it accidentally wasn’t filtering by database name. This worked fine initially because the database user only had access to one DB. When the user was granted access to another DB, it started seeing way more columns than it expected.
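A toy reproduction of that failure mode, assuming nothing about Cloudflare's actual schema (all table, column, and database names below are invented for illustration):

```python
# Toy reproduction of the outage mechanism: a metadata query that forgets
# to filter by database name, plus a preallocated feature limit that
# errors out when exceeded.

FEATURE_LIMIT = 200  # memory is preallocated for at most this many features

# Stand-in for a system catalog listing columns across *all* databases
# the user has been granted access to.
SYSTEM_COLUMNS = (
    [{"database": "main", "name": f"feature_{i}"} for i in range(60)]
    + [{"database": "other", "name": f"col_{i}"} for i in range(150)]
)

def list_features(catalog, database=None):
    """Return column names; the bug is calling this with database=None."""
    return [
        row["name"]
        for row in catalog
        if database is None or row["database"] == database
    ]

def load_features(names):
    """Refuse to load more features than the preallocated capacity."""
    if len(names) > FEATURE_LIMIT:
        raise RuntimeError(f"too many features: {len(names)} > {FEATURE_LIMIT}")
    return names
```

While the user could only see the `main` database, the unfiltered query happened to return the same ~60 rows as the filtered one, so the bug was invisible. Granting access to a second database makes the unfiltered query return 210 columns, which trips the limit check, much like the panic described in the article.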