Must have been a roll-out of a new version to updates recently. Until we serve RPMs from our mirrors and our mirrors only, gmontero, both inside and outside of container builds, we'll continue seeing this forever. Or we could try to change the yum backend to be more graceful here. I don't know enough about the environment to say for certain, but I would be surprised if the caches or other dnf data were actually interacting between the two builds.
Obviously this shouldn't actually be a problem, but if there were some bug in dnf's regenerate-caches-from-scratch code, it wouldn't get seen much in normal operation, since people don't normally run "dnf clean all". That might explain why we see this problem all the time while ordinary Fedora users don't.
Ah, I see what you mean. Was it there just to reduce the size of the image? There's no cache in place; what we usually do in all our images is clean the cache after installation, see here.

This is not a transient failure. I have this error today on one dev machine with one Docker image, but not on a different machine with the same Docker image.

The error appears to happen when there's some specific sort of problem on one of the Fedora mirrors, which will then cause every "yum update" that hits that mirror to fail until the mirror eventually resyncs with the masters and fixes things.
If you look through the past instances of the flake, it tends to happen in bursts; it will happen 5 or 10 times in one day, and then not at all for a few weeks or months. If you reliably see it on one machine and not on another at a given time, it's just because of DNS caching; one of them has resolved "mirrors.
We've pruned out dependencies on updates and epel, so this should be fixed.

Still having this issue; works fine on my desktop, "Error: Failed to synchronize cache for repo 'updates'" on the server.
Nero is telling you your PC is not feeding the data for the burn. Defragmenting your drive, closing other applications, using good media, and going for a coffee while the burn is in progress may solve your problem! Jeanc1, Jul 29.

I do know that if you're burning, you shouldn't do anything else on the computer. Burning takes up a lot of your resources. I usually open Task Manager (Ctrl-Alt-Delete) and end other programs, like a firewall program or anything else that would use a lot of resources, but not running system files.
I'm pretty knowledgeable when it comes to this kind of stuff, but nobody seems to know why I still get the errors. Some of the problems come from the type of media I use. It won't use anything else. Now when using Memorex it's working 1 out of times. I have not done a defrag lately, but I will run one now, then try to burn a backup of a movie that's on my HD.
I will post another thread with my results. Thanks for the input.

Nope, defragged both drives and still getting the stupid Nero error. Could be a hardware issue.

If it's a hardware issue, it's more likely going to be an issue with the RAM chips.
I'll be able to ascertain if this is it tomorrow when I get a new RAM chip that I ordered to resolve a different issue. Another possibility that seems to have come up quite frequently on other forums is the media, but I have some reservations about that theory because I have two spools of the same brand and almost all of the media on the first spool burned just fine.
That's a Buffer Underrun error. That being said, however, I have absolutely no idea how to prevent it. Have you thought of updating your Nero to a newer version?

Also, you might want to just drop the cached data from the centralized store when in the update path for a particular entity, and then let it be reloaded into the cache on the next request for that data.
This is IMO better than trying to do a true write-through cache where you write to the underlying store as well as the cache. The DB itself might make tweaks to the data (via defaulting unsupplied values, for example), and your cached data in that case might not match what's in the DB.
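Here's a rough sketch of that drop-and-reload-on-update idea in Java. UserDao, CacheClient, the User class, and the key format are all hypothetical stand-ins just to make the pattern concrete, not anything from your code:

    // Hypothetical DAO and cache-client interfaces used only for this sketch.
    interface UserDao {
        User loadById(long id);
        void update(User user);
    }

    interface CacheClient {
        String get(String key);
        void set(String key, String value, int ttlSeconds);
        void delete(String key);
    }

    class User {
        private final long id;
        User(long id) { this.id = id; }
        long getId() { return id; }
    }

    class UserService {
        private final UserDao userDao;
        private final CacheClient cache;

        UserService(UserDao userDao, CacheClient cache) {
            this.userDao = userDao;
            this.cache = cache;
        }

        void updateUser(User user) {
            userDao.update(user);                  // write only to the underlying store
            cache.delete("user:" + user.getId());  // evict the cached copy
            // Deliberately no cache.set(...) here: the next read repopulates the cache
            // from the DB, so defaulted or trigger-modified columns can't drift apart.
        }
    }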
A question was asked in the comments about the advantages of a centralized cache, I'm guessing as opposed to something like an in-memory distributed cache. I'll provide my opinion on that, but first a standard disclaimer. Centralized caching is not a cure-all. It aims to solve specific issues related to in-JVM-memory caching. Before evaluating whether or not to switch to it, you should understand what your problems are first and see if they fit with the benefits of centralized caching.
Don't switch to it simply because someone says it's better than what you are doing. Make sure the reason fits the problem. Okay, now on to my opinion about what kinds of problems centralized caching can solve versus in-JVM-memory (and possibly distributed) caching. I'm going to list two things, although I'm sure there are a few more. Let's start with Overall Memory Footprint.
Say you are doing standard entity caching to protect your relational DB from undue stress. Let's also say that you have a lot of data to cache in order to really protect your DB; say, in the range of many GBs. With in-JVM-memory caching, each app server box ends up holding its own copy of that data, and you would then have to allocate a larger heap to your JVM on every one of them in order to accommodate the cached data.
I'm of the opinion that the JVM heap should be small and streamlined in order to ease the garbage collection burden. If you have large chunks of Old Gen that can't be collected, then you're going to stress your garbage collector when it goes into a full GC and tries to reap something back from that bloated Old Gen space. You want to avoid long full-GC pause times, and bloating your Old Gen is not going to help with that.
Plus, if your memory requirement is above a certain threshold, and you happen to be running 32-bit machines for your app layer, you'll have to upgrade to 64-bit machines, and that can be another prohibitive cost. Now, if you decided to centralize the cached data instead (using something like Redis or Memcached), you could significantly reduce the overall memory footprint of the cached data, because you could have it on a couple of boxes instead of all of the app server boxes in the app layer. You probably want to use a clustered approach (both technologies support it) and at least two servers to give you high availability and avoid a single point of failure in your caching layer (more on that in a sec).
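Purely as an illustration, here's what a get-or-load against a centralized Redis cache can look like from the app layer, assuming Jedis as the client; the host name, key, and TTL are made-up placeholders, and a Memcached client would follow the same shape:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;

    public class CentralizedCacheRead {
        public static void main(String[] args) {
            // A pooled connection to a cache box that lives outside the app-server heap.
            try (JedisPool pool = new JedisPool("cache-1.internal", 6379);
                 Jedis jedis = pool.getResource()) {
                String key = "user:42";
                String value = jedis.get(key);     // shared by every box in the app layer
                if (value == null) {
                    value = "{\"id\":42}";         // stand-in for a real DB load
                    jedis.setex(key, 600, value);  // repopulate with a 10-minute TTL
                }
                System.out.println(value);
            }
        }
    }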
Also, you can tune the app boxes and the cache boxes differently now, as they are serving distinct purposes. The app boxes can be tuned for high throughput and low heap, and the cache boxes can be tuned for large memory. And having smaller heaps will definitely help out with the overall throughput of the app-layer boxes. Now, one quick point for centralized caching in general: you should set up your application in such a way that it can survive without the cache in case it goes completely down for a period of time.
In traditional entity caching, this means that when the cache goes completely unavailable, you are just hitting your DB directly for every request. Not awesome, but also not the end of the world.
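As a sketch of that degrade-gracefully behavior, reusing the hypothetical UserDao/CacheClient/User types from the earlier snippet, the read path can treat any cache failure as a miss and keep serving from the DB:

    class ResilientUserReader {
        private final UserDao userDao;
        private final CacheClient cache;

        ResilientUserReader(UserDao userDao, CacheClient cache) {
            this.userDao = userDao;
            this.cache = cache;
        }

        User loadUser(long id) {
            String key = "user:" + id;
            try {
                String cached = cache.get(key);
                if (cached != null) {
                    // Toy deserialization: this sketch only caches the id; a real app
                    // would cache a serialized entity (JSON, protobuf, ...).
                    return new User(Long.parseLong(cached));
                }
            } catch (RuntimeException cacheDown) {
                // Cache layer unreachable or timing out: treat it as a miss and fall
                // through to the DB instead of failing the request.
            }
            User user = userDao.loadById(id);  // while the cache is down, every read hits the DB
            try {
                cache.set(key, Long.toString(user.getId()), 600);  // best-effort repopulation
            } catch (RuntimeException ignored) {
                // Never let a cache write failure break the read path either.
            }
            return user;
        }
    }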
Okay, now for Data Synchronization Issues. With distributed in-JVM-memory caching, you need to keep the caches in sync: a change to cached data in one node needs to replicate to the other nodes and be synced into their cached data.
This approach is a little scary in that if, for some reason (a network failure, for example), one of the nodes falls out of sync, then when a request goes to that node, the data the user sees will not be accurate with respect to what's currently in the DB. Even worse, if they make another request and that hits a different node, they will see different data, and that will be confusing to the user.