4 changes: 2 additions & 2 deletions .github/workflows/cibuildwheel.yml
@@ -79,7 +79,7 @@ jobs:
# These need to rotate every new Python release.
run: |
set -x
echo "CIBW_BUILD=cp310-* cp311-* cp314-*" >> $GITHUB_ENV
echo "CIBW_BUILD=cp310-* cp311-* cp314-* cp314t-*" >> $GITHUB_ENV
set +x

if: ${{ github.event_name }} == "pull_request"
@@ -170,7 +170,7 @@ jobs:
uses: pypa/cibuildwheel@v3.3.0
env:
CIBW_ARCHS: ARM64
CIBW_SKIP: "cp310-* cp314t-*"
CIBW_SKIP: "cp310-*"

- uses: actions/upload-artifact@v6
with:
6 changes: 5 additions & 1 deletion Changelog
@@ -1,9 +1,13 @@
version 1.7.4 (not yet released)
version 1.7.4 (tag v1.7.4rel)
================================
* Make sure automatic conversion of character arrays <--> string arrays works for Unicode strings (issue #1440).
(previously only worked correctly for encoding="ascii").
* Add netcdf plugins (blosc, zstd, bzip2) in wheels. Blosc plugin doesn't work in Windows wheels.
Macos wheels now use conda provided libs. (PR #1450)
* Add windows/arm (PR #1453) and free-threaded python wheels (issue #1454). Windows wheels now use netcdf-c 4.9.3.
WARNING: netcdf-c is not thread-safe and netcdf4-python does not have internal locking, so expect segfaults if you
use netcdf4-python from multiple threads with free-threaded python. Users must take care to call netcdf4-python
from only a single thread.

version 1.7.3 (tag v1.7.3rel)
=============================
6 changes: 6 additions & 0 deletions README.md
@@ -10,6 +10,12 @@
## News
For details on the latest updates, see the [Changelog](https://github.com/Unidata/netcdf4-python/blob/master/Changelog).

1/5/2026: Version [1.7.4](https://pypi.python.org/pypi/netCDF4/1.7.4) released. Compression plugins are now included in wheels, and windows/arm and
free-threaded python wheels are provided. Automatic conversion of character arrays <--> string arrays now works for Unicode (not just ascii) strings.
WARNING: netcdf-c is not thread-safe and netcdf4-python does not have internal locking, so expect segfaults if you
use netcdf4-python from multiple threads with free-threaded python. Users must take care to call netcdf4-python
from only a single thread.
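The Unicode conversion mentioned above can be tried with a short sketch along these lines (not taken from this PR; the file, dimension, and variable names are made up, and it relies on the `_Encoding` attribute mechanism described in the "Dealing with strings" section of the docs):

```python
# Minimal sketch: round-trip a Unicode string array through a character-array
# variable; the _Encoding attribute turns on automatic str <-> char conversion.
import numpy as np
from netCDF4 import Dataset

with Dataset("strings_demo.nc", "w") as nc:        # hypothetical file name
    nc.createDimension("nstrings", 2)
    nc.createDimension("nchars", 8)                 # must fit the encoded byte length
    v = nc.createVariable("names", "S1", ("nstrings", "nchars"))
    v._Encoding = "utf-8"                           # not just "ascii" as of 1.7.4
    v[:] = np.array(["héllo", "wörld"], dtype="U8") # strings encoded to chars on write
    print(v[:])                                     # decoded back to a string array on read
```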

10/13/2025: Version [1.7.3](https://pypi.python.org/pypi/netCDF4/1.7.3) released. Minor updates/bugfixes and python 3.14 wheels, see Changelog for details.

10/22/2024: Version [1.7.2](https://pypi.python.org/pypi/netCDF4/1.7.2) released. Minor updates/bugfixes and python 3.13 wheels, see Changelog for details.
48 changes: 34 additions & 14 deletions docs/index.html

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -85,6 +85,7 @@ pythonpath = ["test"]
filterwarnings = [
"error",
"ignore::UserWarning",
"ignore::RuntimeWarning",
]

[tool.mypy]
@@ -110,7 +111,6 @@ build-verbosity = 1
build-frontend = "build"
skip = [
"*-musllinux*",
"cp314t-*",
]
test-extras = "tests"
test-sources = [
4 changes: 4 additions & 0 deletions src/netCDF4/_netCDF4.pyx
@@ -1050,6 +1050,10 @@ are collective. There are a couple of important limitations of parallel IO:
to write to it.
- You cannot use variable-length (VLEN) data types.

***Important warning regarding threads:*** The underlying netcdf-c library is not thread-safe, so netcdf4-python cannot perform parallel
IO in a multi-threaded environment. Users should expect segfaults if a netcdf file is opened on multiple threads - care should
be taken to restrict netcdf4-python usage to a single thread, even when using free-threaded python.
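
As a rough illustration of that guidance (not part of the library or this patch), one way to keep netcdf-c on a single thread under free-threaded python is to funnel every netCDF4 call through one dedicated worker thread; the helper and file/variable names below are made up:

```python
# Minimal sketch: a single-worker executor is the only thread that ever
# touches netCDF4, so the non-thread-safe netcdf-c library is never entered
# concurrently even when the rest of the program uses many threads.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from netCDF4 import Dataset

_nc_worker = ThreadPoolExecutor(max_workers=1)

def read_slice(path, varname, index):
    """Run the actual read on the dedicated worker thread and wait for it."""
    def _read():
        with Dataset(path) as nc:
            return np.asarray(nc[varname][index])
    return _nc_worker.submit(_read).result()

# usage from any thread (hypothetical file and variable names):
# temps = read_slice("model_output.nc", "temperature", slice(0, 10))
```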

## Dealing with strings

The most flexible way to store arrays of strings is with the
3 changes: 1 addition & 2 deletions test/test_masked2.py
@@ -64,8 +64,7 @@ def setUp(self):
v = f.createVariable('v',np.float32,'x',zlib=True,least_significant_digit=1)
# assign masked array to that variable with one missing value.
data =\
ma.array([1.5678,99.99,3.75145,4.127654],mask=np.array([False,True,False,False],np.bool_))
data.mask[1]=True
ma.MaskedArray([1.5678,99.99,3.75145,4.127654],mask=np.array([False,True,False,False],np.bool_))
v[:] = data
f.close()

2 changes: 1 addition & 1 deletion test/test_masked3.py
@@ -19,7 +19,7 @@ def setUp(self):

self.fillval = default_fillvals["i2"]
self.v = np.array([self.fillval, 5, 4, -9999], dtype = "i2")
self.v_ma = ma.array([self.fillval, 5, 4, -9999], dtype = "i2", mask = [True, False, False, True])
self.v_ma = ma.MaskedArray([self.fillval, 5, 4, -9999], dtype = "i2", mask = [True, False, False, True])

self.scale_factor = 10.
self.add_offset = 5.
2 changes: 1 addition & 1 deletion test/test_masked4.py
@@ -20,7 +20,7 @@ def setUp(self):
self.valid_max = 32765
self.valid_range = [self.valid_min,self.valid_max]
self.v = np.array([self.valid_min-1, 5, 4, self.valid_max+1], dtype = "i2")
self.v_ma = ma.array([self.valid_min-1, 5, 4, self.valid_max+1], dtype = "i2", mask = [True, False, False, True])
self.v_ma = ma.MaskedArray([self.valid_min-1, 5, 4, self.valid_max+1], dtype = "i2", mask = [True, False, False, True])

self.scale_factor = 10.
self.add_offset = 5.
2 changes: 1 addition & 1 deletion test/test_masked5.py
@@ -17,7 +17,7 @@ def setUp(self):

self.missing_values = [-999,999,0]
self.v = np.array([-999,0,1,2,3,999], dtype = "i2")
self.v_ma = ma.array([-1,0,1,2,3,4], dtype = "i2", \
self.v_ma = ma.MaskedArray([-1,0,1,2,3,4], dtype = "i2", \
mask = [True, True, False, False, False, True])

f = Dataset(self.testfile, 'w')
2 changes: 1 addition & 1 deletion test/test_masked6.py
@@ -18,7 +18,7 @@ def setUp(self):
self.testfile = tempfile.NamedTemporaryFile(suffix='.nc', delete=False).name

self.v = np.array([4, 3, 2, 1], dtype="i2")
self.w = np.ma.array([-1, -2, -3, -4], mask=[False, True, False, False], dtype="i2")
self.w = np.ma.MaskedArray([-1, -2, -3, -4], mask=[False, True, False, False], dtype="i2")

f = Dataset(self.testfile, 'w')
_ = f.createDimension('x', None)
2 changes: 1 addition & 1 deletion test/test_scaled.py
@@ -22,7 +22,7 @@ def setUp(self):
self.missing_value = -9999

self.v = np.array([0, 5, 4, self.missing_value], dtype = "i2")
self.v_ma = ma.array([0, 5, 4, self.missing_value], dtype = "i2",
self.v_ma = ma.MaskedArray([0, 5, 4, self.missing_value], dtype = "i2",
mask = [True, False, False, True], fill_value = self.fillval)

self.scale_factor = 10.
2 changes: 1 addition & 1 deletion test/test_types.py
@@ -22,7 +22,7 @@
zlib=False; complevel=0; shuffle=False; least_significant_digit=None
datatypes = ['f8','f4','i1','i2','i4','i8','u1','u2','u4','u8','S1']
FillValue = 1.0
issue273_data = np.ma.array(['z']*10,dtype='S1',\
issue273_data = np.ma.MaskedArray(['z']*10,dtype='S1',\
mask=[False,False,False,False,False,True,False,False,False,False])

class PrimitiveTypesTestCase(unittest.TestCase):