[release/10.0] Fix ZIP64 header corruption when large file at large offset#124468
Open
github-actions[bot] wants to merge 4 commits into release/10.0 from
Conversation
When both AreSizesTooLarge and IsOffsetTooLarge are true, the Zip64ExtraField was being overwritten in the central directory header logic, losing the size information. This fixes the issue by reusing the existing Zip64ExtraField when adding the offset, using ??= instead of creating a new object. Fixes #114205 Co-authored-by: stephentoub <2642209+stephentoub@users.noreply.github.com>
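The pattern behind the fix can be sketched as follows. This is a hypothetical reconstruction, not the actual runtime code: the Zip64ExtraField class and the flag values here are stand-ins, but the before/after shapes mirror the bug (a fresh object discarding earlier state) and the fix (??= reusing the existing object).

```csharp
using System;

// Hypothetical stand-in for the runtime's internal Zip64ExtraField type.
class Zip64ExtraField
{
    public long? UncompressedSize;
    public long? CompressedSize;
    public long? LocalHeaderOffset;
}

class Demo
{
    // Buggy pattern: when the offset is also too large, a fresh object
    // overwrites the field that already carried the oversized sizes.
    static Zip64ExtraField BuildBuggy(bool sizesTooLarge, bool offsetTooLarge)
    {
        Zip64ExtraField field = null;
        if (sizesTooLarge)
            field = new Zip64ExtraField { UncompressedSize = 5_000_000_000, CompressedSize = 5_000_000_000 };
        if (offsetTooLarge)
            field = new Zip64ExtraField { LocalHeaderOffset = 6_000_000_000 }; // sizes lost
        return field;
    }

    // Fixed pattern: ??= only allocates when no field exists yet,
    // so the size information recorded earlier survives.
    static Zip64ExtraField BuildFixed(bool sizesTooLarge, bool offsetTooLarge)
    {
        Zip64ExtraField field = null;
        if (sizesTooLarge)
            field = new Zip64ExtraField { UncompressedSize = 5_000_000_000, CompressedSize = 5_000_000_000 };
        if (offsetTooLarge)
        {
            field ??= new Zip64ExtraField();
            field.LocalHeaderOffset = 6_000_000_000;
        }
        return field;
    }

    static void Main()
    {
        var buggy = BuildBuggy(sizesTooLarge: true, offsetTooLarge: true);
        var repaired = BuildFixed(sizesTooLarge: true, offsetTooLarge: true);
        Console.WriteLine(buggy.UncompressedSize.HasValue);    // False: sizes were overwritten
        Console.WriteLine(repaired.UncompressedSize.HasValue); // True: sizes preserved
        Console.WriteLine(repaired.LocalHeaderOffset.HasValue);// True: offset added as well
    }
}
```

With both flags true, only the fixed variant ends up with the sizes and the offset recorded together, which is exactly what the ZIP64 central directory header needs.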
Wrap buffer allocation in try-catch for OutOfMemoryException and throw SkipTestException to gracefully skip the test when insufficient memory is available. Co-authored-by: stephentoub <2642209+stephentoub@users.noreply.github.com>
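The allocation guard described in this commit can be sketched as below. This is a hypothetical reconstruction: in the real tests SkipTestException comes from Microsoft.DotNet.XUnitExtensions, so a local stand-in class is declared here to keep the sketch self-contained.

```csharp
using System;

// Local stand-in for Microsoft.DotNet.XUnitExtensions.SkipTestException,
// which test frameworks interpret as "skip" rather than "fail".
class SkipTestException : Exception
{
    public SkipTestException(string message) : base(message) { }
}

class AllocationGuardDemo
{
    // Try to allocate the large buffer a ZIP64 test needs; convert an
    // OutOfMemoryException into a skip instead of a test failure.
    static byte[] AllocateOrSkip(long length)
    {
        try
        {
            return new byte[length];
        }
        catch (OutOfMemoryException)
        {
            throw new SkipTestException($"Unable to allocate {length} bytes; skipping test.");
        }
    }

    static void Main()
    {
        // A small allocation succeeds; on a memory-starved machine a
        // multi-gigabyte request would raise SkipTestException instead.
        byte[] buffer = AllocateOrSkip(1024);
        Console.WriteLine(buffer.Length);
    }
}
```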
Contributor
Tagging subscribers to this area: @karelz, @dotnet/area-system-io-compression
karelz approved these changes on Mar 6, 2026
Contributor
The PR was approved via email - I'll go ahead and add the
Backport of #122837 to release/10.0
/cc @iremyux @copilot
Fixes #122489
Description
ZipArchive produces a corrupted ZIP file when a 4GB+ file is written into an archive that is already 4GB+ in size. This caused 7-Zip to show Extra_ERROR Zip64_ERROR and ZipFile.OpenRead to throw InvalidDataException: A local file header is corrupt on files produced by .NET.
Customer Impact
Reported by RavenDB in #122489, whose snapshot backups produced with ZipArchive became unrecoverable due to ZIP header corruption.
Regression
Yes. The issue was introduced in .NET 10 with commit 44c08b8
Testing
- LargeFile_At_LargeOffset_ZIP64_HeaderPreservation covering the specific scenario
- System.IO.Compression tests pass
Low. Single-line logic change in a specific code path that only affects ZIP64 central directory header writing when both the size and the offset of an entry exceed 4GB. We stop throwing away the previously initialized Zip64ExtraField.