multi_array_chunked.hxx

1 /************************************************************************/
2 /* */
3 /* Copyright 2012-2014 by Ullrich Koethe and Thorben Kroeger */
4 /* */
5 /* This file is part of the VIGRA computer vision library. */
6 /* The VIGRA Website is */
7 /* http://hci.iwr.uni-heidelberg.de/vigra/ */
8 /* Please direct questions, bug reports, and contributions to */
9 /* ullrich.koethe@iwr.uni-heidelberg.de or */
10 /* vigra@informatik.uni-hamburg.de */
11 /* */
12 /* Permission is hereby granted, free of charge, to any person */
13 /* obtaining a copy of this software and associated documentation */
14 /* files (the "Software"), to deal in the Software without */
15 /* restriction, including without limitation the rights to use, */
16 /* copy, modify, merge, publish, distribute, sublicense, and/or */
17 /* sell copies of the Software, and to permit persons to whom the */
18 /* Software is furnished to do so, subject to the following */
19 /* conditions: */
20 /* */
21 /* The above copyright notice and this permission notice shall be */
22 /* included in all copies or substantial portions of the */
23 /* Software. */
24 /* */
25 /* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND */
26 /* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES */
27 /* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND */
28 /* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT */
29 /* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, */
30 /* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING */
31 /* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR */
32 /* OTHER DEALINGS IN THE SOFTWARE. */
33 /* */
34 /************************************************************************/
35 
36 /* benchmark results for a simple loop 'if(iter.get<1>() != count++)'
37 
38  ********************
39  image size: 200^3, chunk size: 64^3, i.e. chunk count: 4^3
40  times in msec, excluding time to store file on disk
41 
42  win64/vs2012 (koethe-laptop): uint8 float double
43  plain array 18 18 18
44  chunked array (all in cache) 25 26 26
45  thread-safe chunked (all in cache) 27 28 29
46  thread-safe chunked (1 slice in cache) 29 33 39
47  thread-safe chunked (1 row in cache) 45 48 52
48  chunked (initial creation, all in cache) 33 43 57
49 
50  linux/gcc 4.7.3 (birdofprey): uint8 float double
51  plain array 16 20 21
52  chunked array (all in cache) 17 23 24
53  thread-safe chunked (all in cache) 19 24 25
54  thread-safe chunked (1 slice in cache) 20 29 34
55  thread-safe chunked (1 row in cache) 24 33 39
56  chunked (initial creation, all in cache) 22 34 48
57 
58  OS X 10.7: uint8 float double
59  plain array 11 22 24
60  chunked array (all in cache) -- -- --
61  thread-safe chunked (all in cache) 20 25 26
62  thread-safe chunked (1 slice in cache) 23 37 46
63  thread-safe chunked (1 row in cache) 32 50 56
64  chunked (initial creation, all in cache) 34 56 77
65  (These numbers refer to nested loop iteration. Scan-order iteration
66  is unfortunately 3.5 times slower on the Mac. On the other hand,
 67  two-level indexing is faster on a Mac than on Linux and Windows --
68  the speed penalty is only a factor of 2 rather than 3.)
69 
70  **********************
71  image size: 400^3, chunk size: 127^3, i.e. chunk count: 4^3
72  times in msec, excluding time to store file on disk
73 
74  win64/vs2012 (koethe-laptop): uint8 float double
75  plain array 130 130 130
76  chunked array (all in cache) 190 190 200
77  thread-safe chunked (all in cache) 190 200 210
78  thread-safe chunked (1 slice in cache) 210 235 280
79  thread-safe chunked (1 row in cache) 240 270 300
80  chunked (initial creation, all in cache) 230 300 400
81 
82  linux/gcc 4.7.3 (birdofprey): uint8 float double
83  plain array 130 162 165
84  chunked array (all in cache) 131 180 184
85  thread-safe chunked (all in cache) 135 183 188
86  thread-safe chunked (1 slice in cache) 146 218 258
87  thread-safe chunked (1 row in cache) 154 229 270
88  chunked (initial creation, all in cache) 173 269 372
89 
90  ***********************
91  Compression:
 92  * I tried ZLIB, LZO, SNAPPY, LZ4, LZFX and FASTLZ (with compression level 1 -- faster --
 93  and level 2 -- higher compression). There are also QuickLZ and LZMAT, which claim
94  to be fast, but they are under a GPL license.
95  * ZLIB compresses best, but is quite slow even at compression level 1
96  (times are in ms and include compression and decompression).
97  byte float double
98  ZLIB 121 3100 5800
99  ZLIB1 79 1110 1190
100  LZO 43 104 280
101  SNAPPY 46 71 305
102  LZ4 42 70 283
103  LZFX 76 278 330
104  FASTLZ1 52 280 309
 105  FASTLZ2 53 286 339
106  * The fast compression algorithms are unable to compress the float array
107  and achieve ~50% for the double array, whereas ZLIB achieves 32% and 16%
108  respectively (at the fastest compression level 1, it is still 33% and 17%
109  respectively). LZFX cannot even compress the byte data (probably a bug?).
110  Average compression ratios for the byte array are
111  ZLIB: 2.3%
112  ZLIB1: 4.6%
113  LZO: 5.6%
114  SNAPPY: 9.8%
115  LZ4: 9.7%
116  FASTLZ1: 7.6%
117  FASTLZ2: 7.9%
118  * LZO is under GPL (but there is a Java implementation under Apache license at
119  http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.19.2/src/core/org/apache/hadoop/io/compress/lzo/)
120  The others are BSD and MIT (FASTLZ).
121  * Snappy doesn't support Windows natively, but porting is simple (see my github repo)
122  * The source code for LZO, LZ4, LZFX, and FASTLZ can simply be copied to VIGRA,
123  but LZO's GPL license is unsuitable.
124  * HDF5 compression is already sufficient at level 1 (4-15%,
 125  higher levels don't lead to big gains) and is only a factor of 3-10 slower
126  than without compression.
127 */
128 
129 #ifndef VIGRA_MULTI_ARRAY_CHUNKED_HXX
130 #define VIGRA_MULTI_ARRAY_CHUNKED_HXX
131 
132 #include <queue>
133 #include <string>
134 
135 #include "multi_fwd.hxx"
136 #include "multi_handle.hxx"
137 #include "multi_array.hxx"
138 #include "memory.hxx"
139 #include "metaprogramming.hxx"
140 #include "threading.hxx"
141 #include "compression.hxx"
142 
 143 // // FIXME: why is this needed when compiling the Python binding,
144 // // but not when compiling test_multiarray_chunked?
145 // #if defined(__GNUC__)
146 // # define memory_order_release memory_order_seq_cst
147 // # define memory_order_acquire memory_order_seq_cst
148 // #endif
149 
150 #ifdef _WIN32
151 # include "windows.h"
152 #else
153 # include <fcntl.h>
154 # include <stdlib.h>
155 # include <unistd.h>
156 # include <sys/stat.h>
157 # include <sys/mman.h>
158 # include <cstdio>
159 #endif
160 
 161 // Bounds-checking macro, used only if VIGRA_CHECK_BOUNDS is defined.
162 #ifdef VIGRA_CHECK_BOUNDS
163 #define VIGRA_ASSERT_INSIDE(diff) \
164  vigra_precondition(this->isInside(diff), "Index out of bounds")
165 #else
166 #define VIGRA_ASSERT_INSIDE(diff)
167 #endif
168 
169 namespace vigra {
170 
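// Sparse files are disabled on Apple systems below -- presumably because the
// default HFS+ filesystem does not support them, so the temp-file backend
// cannot punch holes for unused chunks.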
171 #ifdef __APPLE__
172  #define VIGRA_NO_SPARSE_FILE
173 #endif
174 
175 #ifdef _WIN32
176 
177 inline
178 void winErrorToException(std::string message = "")
179 {
180  LPVOID lpMsgBuf;
181  DWORD dw = GetLastError();
182 
183  FormatMessage(
184  FORMAT_MESSAGE_ALLOCATE_BUFFER |
185  FORMAT_MESSAGE_FROM_SYSTEM |
186  FORMAT_MESSAGE_IGNORE_INSERTS,
187  NULL,
188  dw,
189  MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
190  (LPTSTR) &lpMsgBuf,
191  0, NULL );
192 
193  message += (char*)lpMsgBuf;
194  LocalFree(lpMsgBuf);
195 
196  throw std::runtime_error(message);
197 }
198 
199 inline
200 std::string winTempFileName(std::string path = "")
201 {
202  if(path == "")
203  {
204  TCHAR default_path[MAX_PATH];
205  if(!GetTempPath(MAX_PATH, default_path))
206  winErrorToException("winTempFileName(): ");
207  path = default_path;
208  }
209 
210  TCHAR name[MAX_PATH];
211  if(!GetTempFileName(path.c_str(), TEXT("vigra"), 0, name))
212  winErrorToException("winTempFileName(): ");
213 
214  return std::string(name);
215 }
216 
217 inline
218 std::size_t winClusterSize()
219 {
220  SYSTEM_INFO info;
221  ::GetSystemInfo(&info);
222  return info.dwAllocationGranularity;
223 }
224 
225 #endif
226 
227 namespace {
228 
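// Memory-mapped chunks must start at an offset the OS can map: the
// allocation granularity on Windows, the page size elsewhere.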
229 #ifdef _WIN32
230 std::size_t mmap_alignment = winClusterSize();
231 #else
232 std::size_t mmap_alignment = sysconf(_SC_PAGE_SIZE);
233 #endif
234 
235 } // anonymous namespace
236 
237 template <unsigned int N, class T>
238 class IteratorChunkHandle;
239 
240 namespace detail {
241 
242 template <unsigned int N>
243 struct ChunkIndexing
244 {
245  template <class T, int M>
246  static void chunkIndex(TinyVector<T, M> const & p,
247  TinyVector<T, M> const & bits,
248  TinyVector<T, M> & index)
249  {
250  typedef std::size_t UI;
251  ChunkIndexing<N-1>::chunkIndex(p, bits, index);
252  index[N-1] = (UI)p[N-1] >> bits[N-1];
253  }
254 
255  template <class T, int M>
256  static std::size_t chunkOffset(TinyVector<T, M> const & p,
257  TinyVector<T, M> const & bits,
258  TinyVector<T, M> const & strides)
259  {
260  typedef std::size_t UI;
261  return ChunkIndexing<N-1>::chunkOffset(p, bits, strides) +
262  ((UI)p[N-1] >> bits[N-1]) * strides[N-1];
263  }
264 
265  template <class T, int M>
266  static std::size_t offsetInChunk(TinyVector<T, M> const & p,
267  TinyVector<T, M> const & mask,
268  TinyVector<T, M> const & strides)
269  {
270  typedef std::size_t UI;
271  return ChunkIndexing<N-1>::offsetInChunk(p, mask, strides) +
272  ((UI)p[N-1] & (UI)mask[N-1]) * strides[N-1];
273  }
274 };
275 
276 template <>
277 struct ChunkIndexing<1>
278 {
279  template <class T, int M>
280  static void chunkIndex(TinyVector<T, M> const & p,
281  TinyVector<T, M> const & bits,
282  TinyVector<T, M> & index)
283  {
284  typedef std::size_t UI;
285  index[0] = (UI)p[0] >> bits[0];
286  }
287 
288  template <class T, int M>
289  static std::size_t chunkOffset(TinyVector<T, M> const & p,
290  TinyVector<T, M> const & bits,
291  TinyVector<T, M> const & strides)
292  {
293  typedef std::size_t UI;
294  return ((UI)p[0] >> bits[0]) * strides[0];
295  }
296 
297  template <class T, int M>
298  static std::size_t offsetInChunk(TinyVector<T, M> const & p,
299  TinyVector<T, M> const & mask,
300  TinyVector<T, M> const & strides)
301  {
302  typedef std::size_t UI;
303  return ((UI)p[0] & (UI)mask[0]) * strides[0];
304  }
305 };
306 
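// Shape of the chunk grid, computed by ceiling division: adding the mask
// (i.e. chunk_shape - 1) before the right-shift rounds partially filled
// border chunks up to a full grid cell.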
307 template <class T, int M>
308 inline TinyVector<T, M>
309 computeChunkArrayShape(TinyVector<T, M> shape,
310  TinyVector<T, M> const & bits,
311  TinyVector<T, M> const & mask)
312 {
313  for(int k=0; k<M; ++k)
314  shape[k] = (shape[k] + mask[k]) >> bits[k];
315  return shape;
316 }
317 
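// Cache size heuristic: enough slots to hold the chunks of the largest
// 2D face of the chunk grid (i.e. a full slice of chunks), plus one.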
318 template <class T, int M>
319 inline T
320 defaultCacheSize(TinyVector<T, M> const & shape)
321 {
322  T res = max(shape);
323  for(int k=0; k<M-1; ++k)
324  for(int j=k+1; j<M; ++j)
325  res = std::max(res, shape[k]*shape[j]);
326  return res + 1;
327 }
328 
329 } // namespace detail
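/* Illustration of the bit-mask chunk addressing implemented above (not a
   library API, just the arithmetic worked out): with a chunk shape of
   64 = 2^6 along an axis, bits = 6 and mask = 63, so a global coordinate p
   splits as

       chunk index     = p >> 6     (which chunk)
       offset in chunk = p & 63     (position inside that chunk)

   For example, p = 200 lies in chunk 3 at local offset 8, since
   200 = 3*64 + 8. ChunkIndexing applies this decomposition per dimension
   and folds the results with the corresponding strides.
*/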
330 
331 template <unsigned int N, class T>
332 class ChunkBase
333 {
334  public:
335  typedef typename MultiArrayShape<N>::type shape_type;
336  typedef T value_type;
337  typedef T* pointer;
338 
339  ChunkBase()
340  : strides_()
341  , pointer_()
342  {}
343 
344  ChunkBase(shape_type const & strides, pointer p = 0)
345  : strides_(strides)
346  , pointer_(p)
347  {}
348 
349  typename MultiArrayShape<N>::type strides_;
350  T * pointer_;
351 };
352 
353 template <unsigned int N, class T>
354 class SharedChunkHandle
355 {
356  public:
357  typedef typename MultiArrayShape<N>::type shape_type;
358 
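 // The atomic chunk_state_ below apparently doubles as a reference count:
 // values >= 0 count the chunk's current users, while the negative
 // constants encode the special states.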
359  static const long chunk_asleep = -2;
360  static const long chunk_uninitialized = -3;
361  static const long chunk_locked = -4;
362  static const long chunk_failed = -5;
363 
364  SharedChunkHandle()
365  : pointer_(0)
366  , chunk_state_()
367  {
368  chunk_state_ = chunk_uninitialized;
369  }
370 
371  SharedChunkHandle(SharedChunkHandle const & rhs)
372  : pointer_(rhs.pointer_)
373  , chunk_state_()
374  {
375  chunk_state_ = chunk_uninitialized;
376  }
377 
378  shape_type const & strides() const
379  {
380  return pointer_->strides_;
381  }
382 
383  ChunkBase<N, T> * pointer_;
384  mutable threading::atomic_long chunk_state_;
385 
386  private:
387  SharedChunkHandle & operator=(SharedChunkHandle const & rhs);
388 };
389 
390 template <unsigned int N, class T>
391 class ChunkedArrayBase
392 {
393  public:
394  enum ActualDimension{ actual_dimension = (N == 0) ? 1 : N };
395  typedef typename MultiArrayShape<N>::type shape_type;
396  typedef T value_type;
397  typedef value_type * pointer;
398  typedef value_type & reference;
399  typedef ChunkBase<N, T> Chunk;
400 
401  ChunkedArrayBase()
402  : shape_()
403  , chunk_shape_()
404  {}
405 
406  ChunkedArrayBase(shape_type const & shape, shape_type const & chunk_shape)
407  : shape_(shape)
408  , chunk_shape_(prod(chunk_shape) > 0 ? chunk_shape : detail::ChunkShape<N, T>::defaultShape())
409  {}
410 
411  virtual ~ChunkedArrayBase()
412  {}
413 
414  virtual void unrefChunk(IteratorChunkHandle<N, T> * h) const = 0;
415 
416  virtual pointer chunkForIterator(shape_type const & point,
417  shape_type & strides, shape_type & upper_bound,
418  IteratorChunkHandle<N, T> * h) = 0;
419 
420  virtual pointer chunkForIterator(shape_type const & point,
421  shape_type & strides, shape_type & upper_bound,
422  IteratorChunkHandle<N, T> * h) const = 0;
423 
424  virtual std::string backend() const = 0;
425 
426  virtual shape_type chunkArrayShape() const = 0;
427 
428  virtual bool isReadOnly() const
429  {
430  return false;
431  }
432 
433  MultiArrayIndex size() const
434  {
435  return prod(shape_);
436  }
437 
438  shape_type const & shape() const
439  {
440  return shape_;
441  }
442 
443  MultiArrayIndex shape(MultiArrayIndex d) const
444  {
445  return shape_[d];
446  }
447 
448  shape_type const & chunkShape() const
449  {
450  return chunk_shape_;
451  }
452 
453  MultiArrayIndex chunkShape(MultiArrayIndex d) const
454  {
455  return chunk_shape_[d];
456  }
457 
458  bool isInside(shape_type const & p) const
459  {
460  for(int d=0; d<N; ++d)
461  if(p[d] < 0 || p[d] >= shape_[d])
462  return false;
463  return true;
464  }
465 
466  shape_type shape_, chunk_shape_;
467 };
468 
469 template <unsigned int N, class T>
470 class ChunkedArray;
471 
472 struct ChunkUnrefProxyBase
473 {
474  virtual ~ChunkUnrefProxyBase() {}
475 };
476 
477 template <unsigned int N, class T_MaybeConst>
478 class MultiArrayView<N, T_MaybeConst, ChunkedArrayTag>
479 : public ChunkedArrayBase<N, typename UnqualifiedType<T_MaybeConst>::type>
480 {
481  public:
482  enum ActualDimension { actual_dimension = (N==0) ? 1 : N };
483  typedef typename UnqualifiedType<T_MaybeConst>::type T;
484  typedef T value_type; // FIXME: allow Multiband<T> ???
485  typedef T_MaybeConst & reference;
486  typedef const value_type &const_reference;
487  typedef T_MaybeConst * pointer;
488  typedef const value_type *const_pointer;
489  typedef typename MultiArrayShape<actual_dimension>::type difference_type;
490  typedef difference_type key_type;
491  typedef difference_type size_type;
492  typedef difference_type shape_type;
493  typedef MultiArrayIndex difference_type_1;
494  typedef ChunkIterator<actual_dimension, T_MaybeConst> chunk_iterator;
495  typedef ChunkIterator<actual_dimension, T const> chunk_const_iterator;
496  typedef StridedScanOrderIterator<actual_dimension, ChunkedMemory<T_MaybeConst>, reference, pointer> iterator;
497  typedef StridedScanOrderIterator<actual_dimension, ChunkedMemory<T const>, const_reference, const_pointer> const_iterator;
498  typedef MultiArrayView<N, T_MaybeConst, ChunkedArrayTag> view_type;
499  typedef MultiArrayView<N, T const, ChunkedArrayTag> const_view_type;
500  typedef ChunkedArrayTag StrideTag;
501  typedef ChunkBase<N, T> Chunk;
502 
503  typedef MultiArray<N, Chunk> ChunkHolder;
504 
505  struct UnrefProxy
506  : public ChunkUnrefProxyBase
507  {
508  UnrefProxy(int size, ChunkedArray<N, T> * array)
509  : chunks_(size)
510  , array_(array)
511  {}
512 
513  ~UnrefProxy()
514  {
515  if(array_)
516  array_->unrefChunks(chunks_);
517  }
518 
519  ArrayVector<SharedChunkHandle<N, T> *> chunks_;
520  ChunkedArray<N, T> * array_;
521  };
522 
523  virtual shape_type chunkArrayShape() const
524  {
525  return chunks_.shape();
526  }
527 
528  shape_type chunkStart(shape_type const & global_start) const
529  {
530  shape_type chunk_start(SkipInitialization);
531  detail::ChunkIndexing<N>::chunkIndex(global_start, bits_, chunk_start);
532  return chunk_start;
533  }
534 
535  shape_type chunkStop(shape_type global_stop) const
536  {
537  global_stop -= shape_type(1);
538  shape_type chunk_stop(SkipInitialization);
539  detail::ChunkIndexing<N>::chunkIndex(global_stop, bits_, chunk_stop);
540  chunk_stop += shape_type(1);
541  return chunk_stop;
542  }
543 
544  virtual void unrefChunk(IteratorChunkHandle<N, T> *) const {}
545 
546  virtual T* chunkForIterator(shape_type const & point,
547  shape_type & strides, shape_type & upper_bound,
548  IteratorChunkHandle<N, T> * h)
549  {
550  return const_cast<MultiArrayView const *>(this)->chunkForIterator(point, strides, upper_bound, h);
551  }
552 
553  virtual T* chunkForIterator(shape_type const & point,
554  shape_type & strides, shape_type & upper_bound,
555  IteratorChunkHandle<N, T> * h) const
556  {
557  shape_type global_point = point + h->offset_;
558 
559  if(!this->isInside(global_point))
560  {
561  upper_bound = point + this->chunk_shape_;
562  return 0;
563  }
564 
565  global_point += offset_;
566  shape_type coffset = offset_ + h->offset_;
567 
568  shape_type chunkIndex = chunkStart(global_point);
569  Chunk const * chunk = &chunks_[chunkIndex];
570  strides = chunk->strides_;
571  upper_bound = (chunkIndex + shape_type(1)) * this->chunk_shape_ - coffset;
572  std::size_t offset = detail::ChunkIndexing<N>::offsetInChunk(global_point, mask_, strides);
573  return const_cast<T*>(chunk->pointer_ + offset);
574  }
575 
576  virtual std::string backend() const
577  {
578  return "MultiArrayView<ChunkedArrayTag>";
579  }
580 
581  MultiArrayView()
582  : ChunkedArrayBase<N, T>()
583  {}
584 
585  MultiArrayView(shape_type const & shape, shape_type const & chunk_shape)
586  : ChunkedArrayBase<N, T>(shape, chunk_shape)
587  {}
588 
589  MultiArrayView & operator=(MultiArrayView const & rhs)
590  {
591  if(this != &rhs)
592  {
593  if(!hasData())
594  {
595  ChunkedArrayBase<N, T>::operator=(rhs);
596  chunks_ = rhs.chunks_;
597  offset_ = rhs.offset_;
598  bits_ = rhs.bits_;
599  mask_ = rhs.mask_;
600  unref_ = rhs.unref_;
601  }
602  else
603  {
604  vigra_precondition(this->shape() == rhs.shape(),
605  "MultiArrayView::operator=(): shape mismatch.");
606  iterator i = begin(), ie = end();
607  const_iterator j = rhs.begin();
608  for(; i != ie; ++i, ++j)
609  *i = *j;
610  }
611  }
612  return *this;
613  }
614 
615  #define VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(op) \
616  template<class U, class C1> \
617  MultiArrayView & operator op(MultiArrayView<N, U, C1> const & rhs) \
618  { \
619  vigra_precondition(this->shape() == rhs.shape(), \
620  "MultiArrayView::operator" #op "(): shape mismatch."); \
621  iterator i = begin(), ie = end(); \
622  typename MultiArrayView<N, U, C1>::const_iterator j = rhs.begin(); \
623  for(; i != ie; ++i, ++j) \
624  *i op detail::RequiresExplicitCast<value_type>::cast(*j); \
625  return *this; \
626  } \
627  \
628  MultiArrayView & operator op(value_type const & v) \
629  { \
630  if(hasData()) \
631  { \
632  iterator i = begin(), ie = end(); \
633  for(; i != ie; ++i) \
634  *i op v; \
635  } \
636  return *this; \
637  }
638 
639  VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(=)
640  VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(+=)
641  VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(-=)
642  VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(*=)
643  VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(/=)
644 
645  #undef VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN
646 
647  // template<class Expression>
648  // MultiArrayView & operator=(multi_math::MultiMathOperand<Expression> const & rhs)
649  // {
650  // multi_math::math_detail::assign(*this, rhs);
651  // return *this;
652  // }
653 
654  // /** Add-assignment of an array expression. Fails with
655  // <tt>PreconditionViolation</tt> exception when the shapes do not match.
656  // */
657  // template<class Expression>
658  // MultiArrayView & operator+=(multi_math::MultiMathOperand<Expression> const & rhs)
659  // {
660  // multi_math::math_detail::plusAssign(*this, rhs);
661  // return *this;
662  // }
663 
664  // /** Subtract-assignment of an array expression. Fails with
665  // <tt>PreconditionViolation</tt> exception when the shapes do not match.
666  // */
667  // template<class Expression>
668  // MultiArrayView & operator-=(multi_math::MultiMathOperand<Expression> const & rhs)
669  // {
670  // multi_math::math_detail::minusAssign(*this, rhs);
671  // return *this;
672  // }
673 
674  // /** Multiply-assignment of an array expression. Fails with
675  // <tt>PreconditionViolation</tt> exception when the shapes do not match.
676  // */
677  // template<class Expression>
678  // MultiArrayView & operator*=(multi_math::MultiMathOperand<Expression> const & rhs)
679  // {
680  // multi_math::math_detail::multiplyAssign(*this, rhs);
681  // return *this;
682  // }
683 
684  // /** Divide-assignment of an array expression. Fails with
685  // <tt>PreconditionViolation</tt> exception when the shapes do not match.
686  // */
687  // template<class Expression>
688  // MultiArrayView & operator/=(multi_math::MultiMathOperand<Expression> const & rhs)
689  // {
690  // multi_math::math_detail::divideAssign(*this, rhs);
691  // return *this;
692  // }
693 
694  reference operator[](shape_type point)
695  {
696  VIGRA_ASSERT_INSIDE(point);
697  point += offset_;
698  Chunk * chunk = chunks_.data() +
699  detail::ChunkIndexing<N>::chunkOffset(point, bits_, chunks_.stride());
700  return *(chunk->pointer_ +
701  detail::ChunkIndexing<N>::offsetInChunk(point, mask_, chunk->strides_));
702  }
703 
704  const_reference operator[](shape_type const & point) const
705  {
706  return const_cast<MultiArrayView *>(this)->operator[](point);
707  }
708 
709  template <int M>
710  MultiArrayView<N-M, value_type, ChunkedArrayTag>
711  operator[](const TinyVector<MultiArrayIndex, M> &d) const
712  {
713  return bindInner(d);
714  }
715 
716  reference operator[](difference_type_1 d)
717  {
718  return operator[](scanOrderIndexToCoordinate(d));
719  }
720 
721  const_reference operator[](difference_type_1 d) const
722  {
723  return operator[](scanOrderIndexToCoordinate(d));
724  }
725 
726  difference_type scanOrderIndexToCoordinate(difference_type_1 d) const
727  {
728  difference_type coord(SkipInitialization);
729  detail::ScanOrderToCoordinate<actual_dimension>::exec(d, this->shape_, coord);
730  return coord;
731  }
732 
733  /** convert coordinate to scan-order index.
734  */
735  difference_type_1 coordinateToScanOrderIndex(const difference_type &d) const
736  {
737  return detail::CoordinateToScanOrder<actual_dimension>::exec(this->shape_, d);
738  }
739 
740  // /** 1D array access. Use only if N == 1.
741  // */
742  // reference operator() (difference_type_1 x)
743  // {
744  // VIGRA_ASSERT_INSIDE(difference_type(x));
745  // return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x)];
746  // }
747 
748  // /** 2D array access. Use only if N == 2.
749  // */
750  // reference operator() (difference_type_1 x, difference_type_1 y)
751  // {
752  // VIGRA_ASSERT_INSIDE(difference_type(x, y));
753  // return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x, y)];
754  // }
755 
756  // /** 3D array access. Use only if N == 3.
757  // */
758  // reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z)
759  // {
760  // VIGRA_ASSERT_INSIDE(difference_type(x, y, z));
761  // return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z];
762  // }
763 
764  // /** 4D array access. Use only if N == 4.
765  // */
766  // reference operator() (difference_type_1 x, difference_type_1 y,
767  // difference_type_1 z, difference_type_1 u)
768  // {
769  // VIGRA_ASSERT_INSIDE(difference_type(x, y, z, u));
770  // return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u];
771  // }
772 
773  // /** 5D array access. Use only if N == 5.
774  // */
775  // reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z,
776  // difference_type_1 u, difference_type_1 v)
777  // {
778  // VIGRA_ASSERT_INSIDE(difference_type(x, y,z, u,v));
779  // return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u + m_stride[4]*v];
780  // }
781 
782  // /** 1D const array access. Use only if N == 1.
783  // */
784  // const_reference operator() (difference_type_1 x) const
785  // {
786  // VIGRA_ASSERT_INSIDE(difference_type(x));
787  // return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x)];
788  // }
789 
790  // /** 2D const array access. Use only if N == 2.
791  // */
792  // const_reference operator() (difference_type_1 x, difference_type_1 y) const
793  // {
794  // VIGRA_ASSERT_INSIDE(difference_type(x, y));
795  // return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x, y)];
796  // }
797 
798  // /** 3D const array access. Use only if N == 3.
799  // */
800  // const_reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z) const
801  // {
802  // VIGRA_ASSERT_INSIDE(difference_type(x,y,z));
803  // return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z];
804  // }
805 
806  // /** 4D const array access. Use only if N == 4.
807  // */
808  // const_reference operator() (difference_type_1 x, difference_type_1 y,
809  // difference_type_1 z, difference_type_1 u) const
810  // {
811  // VIGRA_ASSERT_INSIDE(difference_type(x,y,z,u));
812  // return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u];
813  // }
814 
815  // /** 5D const array access. Use only if N == 5.
816  // */
817  // const_reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z,
818  // difference_type_1 u, difference_type_1 v) const
819  // {
820  // VIGRA_ASSERT_INSIDE(difference_type(x,y,z,u,v));
821  // return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u + m_stride[4]*v];
822  // }
823 
824  template <class U>
825  MultiArrayView & init(const U & init)
826  {
827  return operator=(init);
828  }
829 
830  template <class U, class CN>
831  void copy(const MultiArrayView <N, U, CN>& rhs)
832  {
833  operator=(rhs);
834  }
835 
836  template <class T2, class C2>
837  void swapData(MultiArrayView <N, T2, C2> rhs)
838  {
839  if(this == &rhs)
840  return;
841  vigra_precondition(this->shape() == rhs.shape(),
842  "MultiArrayView::swapData(): shape mismatch.");
843  iterator i = begin(), ie = end();
844  typename MultiArrayView<N, T2, C2>::iterator j = rhs.begin();
845  for(; i != ie; ++i, ++j)
846  std::swap(*i, *j);
847  }
848 
849  bool isUnstrided(unsigned int dimension = N-1) const
850  {
851  if(chunks_.size() > 1)
852  return false;
853  difference_type s = vigra::detail::defaultStride<actual_dimension>(this->shape());
854  for(unsigned int k = 0; k <= dimension; ++k)
855  if(chunks_.data()->strides_[k] != s[k])
856  return false;
857  return true;
858  }
859 
860  MultiArrayView<N-1, value_type, ChunkedArrayTag>
861  bindAt(MultiArrayIndex m, MultiArrayIndex d) const
862  {
863  MultiArrayView<N-1, value_type, ChunkedArrayTag> res(this->shape_.dropIndex(m), this->chunk_shape_.dropIndex(m));
864  res.offset_ = offset_.dropIndex(m);
865  res.bits_ = bits_.dropIndex(m);
866  res.mask_ = mask_.dropIndex(m);
867  res.chunks_.reshape(chunks_.shape().dropIndex(m));
868  res.unref_ = unref_;
869 
870  typedef std::size_t UI;
871  UI start = offset_[m] + d;
872  UI chunk_start = start >> bits_[m];
873  UI startInChunk = start - chunk_start * this->chunk_shape_[m];
874 
875  MultiArrayView<N-1, Chunk> view(chunks_.bindAt(m, chunk_start));
876  MultiCoordinateIterator<N-1> i(view.shape()),
877  end(i.getEndIterator());
878  for(; i != end; ++i)
879  {
880  res.chunks_[*i].pointer_ = view[*i].pointer_ + startInChunk*view[*i].strides_[m];
881  res.chunks_[*i].strides_ = view[*i].strides_.dropIndex(m);
882  }
883 
884  return res;
885  }
886 
887  template <unsigned int M>
888  MultiArrayView <N-1, value_type, ChunkedArrayTag>
889  bind (difference_type_1 d) const
890  {
891  return bindAt(M, d);
892  }
893 
894  MultiArrayView <N-1, value_type, ChunkedArrayTag>
895  bindOuter (difference_type_1 d) const
896  {
897  return bindAt(N-1, d);
898  }
899 
900  template <int M, class Index>
901  MultiArrayView <N-M, value_type, ChunkedArrayTag>
902  bindOuter(const TinyVector <Index, M> &d) const
903  {
904  return bindAt(N-1, d[M-1]).bindOuter(d.dropIndex(M-1));
905  }
906 
907  template <class Index>
908  MultiArrayView <N-1, value_type, ChunkedArrayTag>
909  bindOuter(const TinyVector <Index, 1> &d) const
910  {
911  return bindAt(N-1, d[0]);
912  }
913 
914  MultiArrayView <N-1, value_type, ChunkedArrayTag>
915  bindInner (difference_type_1 d) const
916  {
917  return bindAt(0, d);
918  }
919 
920  template <int M, class Index>
921  MultiArrayView <N-M, value_type, ChunkedArrayTag>
922  bindInner(const TinyVector <Index, M> &d) const
923  {
924  return bindAt(0, d[0]).bindInner(d.dropIndex(0));
925  }
926 
927  template <class Index>
928  MultiArrayView <N-1, value_type, ChunkedArrayTag>
929  bindInner(const TinyVector <Index, 1> &d) const
930  {
931  return bindAt(0, d[0]);
932  }
933 
934  // MultiArrayView <N, typename ExpandElementResult<T>::type, StridedArrayTag>
935  // bindElementChannel(difference_type_1 i) const
936  // {
937  // vigra_precondition(0 <= i && i < ExpandElementResult<T>::size,
938  // "MultiArrayView::bindElementChannel(i): 'i' out of range.");
939  // return expandElements(0).bindInner(i);
940  // }
941 
942  // MultiArrayView <N+1, typename ExpandElementResult<T>::type, StridedArrayTag>
943  // expandElements(difference_type_1 d) const;
944 
945  // MultiArrayView <N+1, T, StrideTag>
946  // insertSingletonDimension (difference_type_1 i) const;
947 
948  // MultiArrayView<N, Multiband<value_type>, StrideTag> multiband() const
949  // {
950  // return MultiArrayView<N, Multiband<value_type>, StrideTag>(*this);
951  // }
952 
953  // MultiArrayView<1, T, StridedArrayTag> diagonal() const
954  // {
955  // return MultiArrayView<1, T, StridedArrayTag>(Shape1(vigra::min(m_shape)),
956  // Shape1(vigra::sum(m_stride)), m_ptr);
957  // }
958 
959  inline void
960  checkSubarrayBounds(shape_type const & start, shape_type const & stop,
961  std::string message) const
962  {
963  message += ": subarray out of bounds.";
964  vigra_precondition(allLessEqual(shape_type(), start) &&
965  allLess(start, stop) &&
966  allLessEqual(stop, this->shape_),
967  message);
968  }
969 
970  MultiArrayView<N, value_type, ChunkedArrayTag>
971  subarray(shape_type start, shape_type stop)
972  {
973  checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::subarray()");
974  start += offset_;
975  stop += offset_;
976  shape_type chunk_start(chunkStart(start));
977 
978  MultiArrayView<N, value_type, ChunkedArrayTag> view(stop-start, this->chunk_shape_);
979  view.chunks_ = chunks_.subarray(chunk_start, chunkStop(stop));
980  view.offset_ = start - chunk_start * this->chunk_shape_;
981  view.bits_ = bits_;
982  view.mask_ = mask_;
983  view.unref_ = unref_;
984  return view;
985  }
986 
987  // /** apply an additional striding to the image, thereby reducing
988  // the shape of the array.
989  // for example, multiplying the stride of dimension one by three
990  // turns an appropriately laid out (interleaved) rgb image into
991  // a single band image.
992  // */
993  // MultiArrayView <N, T, StridedArrayTag>
994  // stridearray (const difference_type &s) const
995  // {
996  // difference_type shape = m_shape;
997  // for (unsigned int i = 0; i < actual_dimension; ++i)
998  // shape [i] /= s [i];
999  // return MultiArrayView <N, T, StridedArrayTag>(shape, m_stride * s, m_ptr);
1000  // }
1001 
1002  MultiArrayView<N, value_type, ChunkedArrayTag>
1003  transpose () const
1004  {
1005  return transpose(difference_type::linearSequence(N-1, -1));
1006  }
1007 
1008  MultiArrayView<N, value_type, ChunkedArrayTag>
1009  transpose(const difference_type &permutation) const
1010  {
1011  MultiArrayView<N, value_type, ChunkedArrayTag>
1012  view(vigra::transpose(this->shape_, permutation), vigra::transpose(this->chunk_shape_, permutation));
1012  view(vigra::transpose(this->shape_, permutation), vigra::transpose(this->chunk_shape_, permutation));
1013  view.chunks_ = chunks_.transpose(permutation); // also checks if permutation is valid
1014  view.offset_ = vigra::transpose(offset_, permutation);
1015  view.bits_ = vigra::transpose(bits_, permutation);
1016  view.mask_ = vigra::transpose(mask_, permutation);
1017  view.unref_ = unref_;
1018  typename MultiArray<N, Chunk>::iterator i = view.chunks_.begin(),
1019  iend = view.chunks_.end();
1020  for(; i != iend; ++i)
1021  i->strides_ = vigra::transpose(i->strides_, permutation);
1022  return view;
1023  }
1024 
1025  // MultiArrayView <N, T, StridedArrayTag>
1026  // permuteDimensions (const difference_type &s) const;
1027 
1028  // /** Permute the dimensions of the array so that the strides are in ascending order.
1029  // Determines the appropriate permutation and then calls permuteDimensions().
1030  // */
1031  // MultiArrayView <N, T, StridedArrayTag>
1032  // permuteStridesAscending() const;
1033 
1034  // /** Permute the dimensions of the array so that the strides are in descending order.
1035  // Determines the appropriate permutation and then calls permuteDimensions().
1036  // */
1037  // MultiArrayView <N, T, StridedArrayTag>
1038  // permuteStridesDescending() const;
1039 
1040  // /** Compute the ordering of the strides in this array.
 1041  // The result describes the current permutation of the axes relative
1042  // to the standard ascending stride order.
1043  // */
1044  // difference_type strideOrdering() const
1045  // {
1046  // return strideOrdering(m_stride);
1047  // }
1048 
1049  // /** Compute the ordering of the given strides.
 1050  // The result describes the current permutation of the axes relative
1051  // to the standard ascending stride order.
1052  // */
1053  // static difference_type strideOrdering(difference_type strides);
1054 
1055  template <class U, class C1>
1056  bool operator==(MultiArrayView<N, U, C1> const & rhs) const
1057  {
1058  if(this->shape() != rhs.shape())
1059  return false;
1060  const_iterator i = begin(), ie = end();
 1061  typename MultiArrayView<N, U, C1>::const_iterator j = rhs.begin();
 1062  for(; i != ie; ++i, ++j)
1063  if(*i != *j)
1064  return false;
1065  return true;
1066  }
1067 
1068  template <class U, class C1>
1069  bool operator!=(MultiArrayView<N, U, C1> const & rhs) const
1070  {
1071  return !operator==(rhs);
1072  }
1073 
1074  // bool all() const
1075  // {
1076  // bool res = true;
1077  // detail::reduceOverMultiArray(traverser_begin(), shape(),
1078  // res,
1079  // detail::AllTrueReduceFunctor(),
1080  // MetaInt<actual_dimension-1>());
1081  // return res;
1082  // }
1083 
1084  // bool any() const
1085  // {
1086  // bool res = false;
1087  // detail::reduceOverMultiArray(traverser_begin(), shape(),
1088  // res,
1089  // detail::AnyTrueReduceFunctor(),
1090  // MetaInt<actual_dimension-1>());
1091  // return res;
1092  // }
1093 
1094  // void minmax(T * minimum, T * maximum) const
1095  // {
1096  // std::pair<T, T> res(NumericTraits<T>::max(), NumericTraits<T>::min());
1097  // detail::reduceOverMultiArray(traverser_begin(), shape(),
1098  // res,
1099  // detail::MinmaxReduceFunctor(),
1100  // MetaInt<actual_dimension-1>());
1101  // *minimum = res.first;
1102  // *maximum = res.second;
1103  // }
1104 
1105  // template <class U>
1106  // void meanVariance(U * mean, U * variance) const
1107  // {
1108  // typedef typename NumericTraits<U>::RealPromote R;
1109  // R zero = R();
1110  // triple<double, R, R> res(0.0, zero, zero);
1111  // detail::reduceOverMultiArray(traverser_begin(), shape(),
1112  // res,
1113  // detail::MeanVarianceReduceFunctor(),
1114  // MetaInt<actual_dimension-1>());
1115  // *mean = res.second;
1116  // *variance = res.third / res.first;
1117  // }
1118 
1119  // template <class U>
1120  // U sum() const
1121  // {
1122  // U res = NumericTraits<U>::zero();
1123  // detail::reduceOverMultiArray(traverser_begin(), shape(),
1124  // res,
1125  // detail::SumReduceFunctor(),
1126  // MetaInt<actual_dimension-1>());
1127  // return res;
1128  // }
1129 
1130  // template <class U, class S>
1131  // void sum(MultiArrayView<N, U, S> sums) const
1132  // {
1133  // transformMultiArray(srcMultiArrayRange(*this),
1134  // destMultiArrayRange(sums),
1135  // FindSum<U>());
1136  // }
1137 
1138  // template <class U>
1139  // U product() const
1140  // {
1141  // U res = NumericTraits<U>::one();
1142  // detail::reduceOverMultiArray(traverser_begin(), shape(),
1143  // res,
1144  // detail::ProdReduceFunctor(),
1145  // MetaInt<actual_dimension-1>());
1146  // return res;
1147  // }
1148 
1149  // typename NormTraits<MultiArrayView>::SquaredNormType
1150  // squaredNorm() const
1151  // {
1152  // typedef typename NormTraits<MultiArrayView>::SquaredNormType SquaredNormType;
1153  // SquaredNormType res = NumericTraits<SquaredNormType>::zero();
1154  // detail::reduceOverMultiArray(traverser_begin(), shape(),
1155  // res,
1156  // detail::SquaredL2NormReduceFunctor(),
1157  // MetaInt<actual_dimension-1>());
1158  // return res;
1159  // }
1160 
1161  // typename NormTraits<MultiArrayView>::NormType
1162  // norm(int type = 2, bool useSquaredNorm = true) const;
1163 
1164  bool hasData () const
1165  {
1166  return chunks_.hasData();
1167  }
1168 
1169  iterator begin()
1170  {
1171  return createCoupledIterator(*this);
1172  }
1173 
1174  iterator end()
1175  {
1176  return begin().getEndIterator();
1177  }
1178 
1179  const_iterator cbegin() const
1180  {
1181  return createCoupledIterator(const_cast<MultiArrayView const &>(*this));
1182  }
1183 
1184  const_iterator cend() const
1185  {
1186  return cbegin().getEndIterator();
1187  }
1188 
1189  const_iterator begin() const
1190  {
1191  return createCoupledIterator(*this);
1192  }
1193 
1194  const_iterator end() const
1195  {
1196  return begin().getEndIterator();
1197  }
1198 
1199  chunk_iterator chunk_begin(shape_type const & start, shape_type const & stop)
1200  {
1201  checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::chunk_begin()");
1202  return chunk_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
1203  }
1204 
1205  chunk_iterator chunk_end(shape_type const & start, shape_type const & stop)
1206  {
1207  return chunk_begin(start, stop).getEndIterator();
1208  }
1209 
1210  chunk_const_iterator chunk_begin(shape_type const & start, shape_type const & stop) const
1211  {
1212  checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::chunk_begin()");
1213  return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
1214  }
1215 
1216  chunk_const_iterator chunk_end(shape_type const & start, shape_type const & stop) const
1217  {
1218  return chunk_begin(start, stop).getEndIterator();
1219  }
1220 
1221  chunk_const_iterator chunk_cbegin(shape_type const & start, shape_type const & stop) const
1222  {
1223  checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::chunk_cbegin()");
1224  return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
1225  }
1226 
1227  chunk_const_iterator chunk_cend(shape_type const & start, shape_type const & stop) const
1228  {
1229  return chunk_cbegin(start, stop).getEndIterator();
1230  }
1231 
1232  view_type view ()
1233  {
1234  return *this;
1235  }
1236 
1237  MultiArray<N, Chunk> chunks_;
1238  shape_type offset_, bits_, mask_;
1239  VIGRA_SHARED_PTR<ChunkUnrefProxyBase> unref_;
1240 };
1241 
1242 template <unsigned int N, class T>
1243 typename MultiArrayView<N, T, ChunkedArrayTag>::iterator
1244 createCoupledIterator(MultiArrayView<N, T, ChunkedArrayTag> & m)
1245 {
1246  typedef typename MultiArrayView<N, T, ChunkedArrayTag>::iterator IteratorType;
1247  typedef typename IteratorType::handle_type P1;
1248  typedef typename P1::base_type P0;
1249 
1250  return IteratorType(P1(m,
1251  P0(m.shape())));
1252 }
1253 
1254 template <unsigned int N, class T>
1255 typename MultiArrayView<N, T, ChunkedArrayTag>::const_iterator
1256 createCoupledIterator(MultiArrayView<N, T, ChunkedArrayTag> const & m)
1257 {
1258  typedef typename MultiArrayView<N, T, ChunkedArrayTag>::const_iterator IteratorType;
1259  typedef typename IteratorType::handle_type P1;
1260  typedef typename P1::base_type P0;
1261 
1262  return IteratorType(P1(m,
1263  P0(m.shape())));
1264 }
1265 
1266 /** \addtogroup ChunkedArrayClasses Chunked arrays
1267 
1268  Store big data (potentially larger than RAM) as a collection of rectangular blocks.
1269 */
1270 //@{
1271 
1272 /** \brief Option object for \ref ChunkedArray construction.
1273 */
1274 class ChunkedArrayOptions
1275 {
1276  public:
1277  /** \brief Initialize options with defaults.
1278  */
 1279  ChunkedArrayOptions()
 1280  : fill_value(0.0)
1281  , cache_max(-1)
1282  , compression_method(DEFAULT_COMPRESSION)
1283  {}
1284 
1285  /** \brief Element value for read-only access of uninitialized chunks.
1286 
1287  Default: 0
1288  */
 1289  ChunkedArrayOptions & fillValue(double v)
 1290  {
1291  fill_value = v;
1292  return *this;
1293  }
1294 
1295  ChunkedArrayOptions fillValue(double v) const
1296  {
1297  return ChunkedArrayOptions(*this).fillValue(v);
1298  }
1299 
1300  /** \brief Maximum number of chunks in the cache.
1301 
1302  Default: -1 ( = use a heuristic depending on array shape)
1303  */
 1304  ChunkedArrayOptions & cacheMax(int v)
 1305  {
1306  cache_max = v;
1307  return *this;
1308  }
1309 
1310  ChunkedArrayOptions cacheMax(int v) const
1311  {
1312  return ChunkedArrayOptions(*this).cacheMax(v);
1313  }
1314 
1315  /** \brief Compress inactive chunks with the given method.
1316 
1317  Default: DEFAULT_COMPRESSION (depends on backend)
1318  */
1319  ChunkedArrayOptions & compression(CompressionMethod v)
1320  {
1321  compression_method = v;
1322  return *this;
1323  }
1324 
1325  ChunkedArrayOptions compression(CompressionMethod v) const
1326  {
1327  return ChunkedArrayOptions(*this).compression(v);
1328  }
1329 
1330  double fill_value;
1331  int cache_max;
1332  CompressionMethod compression_method;
1333 };
1334 
1335 /** \brief Interface and base class for chunked arrays.
1336 
1337 Very big data arrays (possibly bigger than the available RAM) can
1338 only be processed in smaller pieces. To support quick access to
1339 these pieces, it is advantageous to store big arrays in chunks,
1340 i.e. as a collection of small rectangular subarrays. The class
1341 ChunkedArray encapsulates storage and handling of these chunks and
1342 provides various APIs to easily access the data.
1343 
1344 <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
1345 Namespace: vigra
1346 
1347 @tparam N the array dimension
1348 @tparam T the type of the array elements
1349 
1350 (these are the same as in \ref MultiArrayView). The actual way chunks are stored is determined by the derived class the program uses:
1351 
1352 <ul>
1353  <li>ChunkedArrayFull: Provides the chunked array API for a standard
1354  \ref MultiArray (i.e. there is only one chunk for the entire array).
1355 
1356  <li>ChunkedArrayLazy: All chunks reside in memory, but are only
1357  allocated upon first access.
1358 
1359  <li>ChunkedArrayCompressed: Like ChunkedArrayLazy, but temporarily
1360  unused chunks are compressed in memory to save space.
1361 
1362  <li>ChunkedArrayTmpFile: Chunks are stored in a memory-mapped file.
1363  Temporarily unused chunks are written to the hard-drive and deleted from
1364  memory.
1365 
1366  <li>ChunkedArrayHDF5: Chunks are stored in a HDF5 dataset by means of
1367  HDF5's native chunked storage capabilities. Temporarily unused chunks are
1368  written to the hard-drive in compressed form and deleted from memory.
1369 </ul>
1370 You must use these derived classes to construct a chunked array because
1371 ChunkedArray itself is an abstract class.
1372 
1373 Chunks can be in one of the following states:
1374 <ul>
1375  <li>uninitialized: Chunks are only initialized (i.e. allocated) upon the first
1376  write access. If an uninitialized chunk is accessed in a read-only manner, the
1377  system returns a pseudo-chunk whose elements have a user-provided fill value.
1378 
1379  <li>asleep: The chunk is currently unused and has been compressed and/or
1380  swapped out to the hard drive.
1381 
1382  <li>inactive: The chunk is currently unused, but still resides in memory.
1383 
1384  <li>active: The chunk resides in memory and is currently in use.
1385 
1386  <li>locked: Chunks are briefly in this state during transitions
1387  between the other states (e.g. while loading and/or decompression is
1388  in progress).
1389 
 1390  <li>failed: An unexpected error occurred, e.g. the system is out of memory
1391  or a write to the hard drive failed.
1392 </ul>
1393 In-memory chunks (active and inactive) are placed in a cache. If a chunk
1394 transitions from the 'asleep' to the 'active' state, it is added to the cache,
1395 and an 'inactive' chunk is removed and sent 'asleep'. If there is no 'inactive'
1396 chunk in the cache, the cache size is temporarily increased. All state
1397 transitions are thread-safe.
1398 
1399 In order to optimize performance, the user should adjust the cache size (via
1400 \ref setCacheMaxSize() or \ref ChunkedArrayOptions) so that it can hold all
1401 chunks that are frequently needed (e.g. all chunks forming a row of the full
1402 array).
1403 
1404 Another performance critical parameter is the chunk shape. While the system
1405 uses sensible defaults (512<sup>2</sup> for 2D arrays, 64<sup>3</sup> for 3D,
1406 64x64x16x4 for 4D, and 64x64x16x4x4 for 5D), the shape may need to be adjusted
1407 via the array's constructor to match the access patterns of the algorithms to
1408 be used. For speed reasons, chunk shapes must be powers of 2.
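For example, a 3D array tuned for slice-wise access along the z-axis could
be constructed as follows (a sketch using the ChunkedArrayLazy backend; the
shapes are arbitrary, but every chunk extent is a power of 2):
\code
    // 2048 x 2048 x 512 voxels in flat chunks of 256 x 256 x 4, so that a
    // z-slice touches a moderate number of small chunks
    ChunkedArrayLazy<3, float> array(Shape3(2048, 2048, 512),
                                     Shape3(256, 256, 4));
\endcode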
1409 
1410 The data in the array can be accessed in several ways. The simplest is
1411 via calls to <tt>checkoutSubarray()</tt> and <tt>commitSubarray()</tt>: These
1412 functions copy an arbitrary subregion of a chunked array (possibly straddling
1413 many chunks) into a standard \ref MultiArrayView for processing, and write
1414 results back into the chunked array:
1415 \code
1416  ChunkedArray<3, float> & chunked_array = ...;
1417 
1418  Shape3 roi_start(1000, 500, 500);
1419  MultiArray<3, float> work_array(Shape3(100, 100, 100));
1420 
1421  // copy data from region (1000,500,500)...(1100,600,600)
1422  chunked_array.checkoutSubarray(roi_start, work_array);
1423 
1424  ... // work phase: process data in work_array as usual
1425 
1426  // write results back into chunked_array
1427  chunked_array.commitSubarray(roi_start, work_array);
1428 \endcode
1429 The required chunks in <tt>chunked_array</tt> will only be active while the
1430 checkout and commit calls are executing. During the work phase, other threads
1431 can use the chunked array's cache to checkout or commit different subregions.
1432 
1433 Alternatively, one can work directly on the chunk storage. This is most easily
1434 achieved by means of chunk iterators:
1435 \code
1436  ChunkedArray<3, float> & chunked_array = ...;
1437 
1438  // define the ROI to be processed
1439  Shape3 roi_start(100, 200, 300), roi_end(1000, 2000, 600);
1440 
1441  // get a pair of chunk iterators ( = iterators over chunks)
1442  auto chunk = chunked_array.chunk_begin(roi_start, roi_end),
1443  end = chunked_array.chunk_end(roi_start, roi_end);
1444 
1445  // iterate over the chunks in the ROI
1446  for(; chunk != end; ++chunk)
1447  {
1448  // get a view to the current chunk's data
1449  // Note: The view actually refers to the intersection of the
1450  // current chunk with the ROI. Thus, chunks which are
1451  // partially outside the ROI are appropriately trimmed.
1452  MultiArrayView<3, float> chunk_view = *chunk;
1453 
1454  ... // work phase: process data in chunk_view as usual
1455  }
1456 \endcode
1457 No memory is duplicated in this approach, and only the current chunk needs
1458 to be active, so that a small chunk cache is sufficient. The iteration
1459 over chunks can be distributed over several threads that process the array
1460 data in parallel. The programmer must make sure that write operations to
1461 individual elements are synchronized between threads. This is usually
1462 achieved by ensuring that the threads are responsible for non-overlapping
1463 regions of the output array.
1464 
1465 An even simpler method is direct element access via indexing. However, the
1466 chunked array has no control over the access order in this case, so it must
1467 potentially activate the relevant chunk upon each access. This is rather
1468 expensive and should only be used for debugging:
1469 \code
1470  ChunkedArray<3, float> & chunked_array = ...;
1471 
1472  Shape3 index(100, 200, 300);
1473  // access data at coordinate 'index'
1474  chunked_array.setItem(index, chunked_array.getItem(index) + 2.0);
1475 \endcode
1476 
1477 Two additional APIs provide access in a way compatible with an ordinary
1478 \ref MultiArrayView. These APIs should be used in functions that are
1479 supposed to work unchanged on both ordinary and chunked arrays. The first
1480 possibility is the chunked scan-order iterator:
1481 \code
1482  ChunkedArray<3, float> & chunked_array = ...;
1483 
1484  // get a pair of scan-order iterators ( = iterators over elements)
1485  auto iter = chunked_array.begin(),
1486  end = chunked_array.end();
1487 
1488  // iterate over all array elements
1489  for(; iter != end; ++iter)
1490  {
1491  // access current element
1492  *iter = *iter + 2.0;
1493  }
1494 \endcode
1495 A new chunk must potentially be activated whenever the iterator crosses
1496 a chunk boundary. Since the overhead of the activation operation can be
1497 amortized over many within-chunk steps, the iteration (excluding the
1498 workload within the loop) takes only twice as long as the iteration over an
1499 unstrided array using an ordinary \ref StridedScanOrderIterator.
1500 
1501 The final possibility is the creation of a MultiArrayView that accesses
1502 an arbitrary ROI directly:
1503 \code
1504  ChunkedArray<3, float> & chunked_array = ...;
1505 
1506  // define the ROI to be processed
1507  Shape3 roi_start(100, 200, 300), roi_end(1000, 2000, 600);
1508 
1509  // create view for ROI
1510  MultiArrayView<3, float, ChunkedArrayTag> view =
 1511  chunked_array.subarray(roi_start, roi_end);
1512 
1513  ... // work phase: process view like any ordinary MultiArrayView
1514 \endcode
1515 Similarly, a lower-dimensional view can be created with one of the
1516 <tt>bind</tt> functions. This approach has the advantage that 'view'
1517 can be passed to any function which is implemented in terms of
1518 MultiArrayViews. However, there are two disadvantages: First, data access
1519 in the view requires two steps (first find the chunk, then find the
1520 appropriate element in the chunk), which causes the chunked view to
1521 be slower than an ordinary MultiArrayView. Second, all chunks intersected
1522 by the view must remain active throughout the view's lifetime, which
1523 may require a big chunk cache and thus keeps many chunks in memory.
1524 */
1525 template <unsigned int N, class T>
1526 class ChunkedArray
1527 : public ChunkedArrayBase<N, T>
1528 {
1529  /*
1530  FIXME:
1531  * backends:
1532  * allocators are not used
1533  * HDF5 only works for scalar types so far
1534  * HDF5 must support read-only and read/write mode
1535  * temp file arrays in swap (just an API addition to the constructor)
1536  * support TIFF chunked reading
1537  * the array implementations should go into cxx files in src/impex
1538  * this requires implementation of the low-level functions independently of dtype
1539  (use 'char *' and multiply shape and stride with sizeof(T))
1540  * don't forget to increment the soversion after the change
1541  * alternative: provide 'config_local.hxx' with flags for available packages
1542  * decide on chunk locking policies for array views (in particular, for index access)
1543  * array view has functions fetch()/release() (better names?) to lock/unlock
1544  _all_ chunks in the view
1545  * release() is automatically called in the destructor
1546  * it should be possible to call fetch in the constructor via a flag,
1547  but should the constructor fetch by default?
1548  * how should fetch() handle the case when the cache is too small
1549  * throw an exception?
1550  * silently enlarge the cache?
1551  * temporarily enlarge the cache?
1552  * provide an option to control the behavior?
1553  * also provide copySubarray() with ReadOnly and ReadWrite flags, where
1554  ReadWrite copies the subarray back in the destructor or on demand
1555  * locking is only required while each slice is copied
1556  * the copy functions can use normal array views and iterators
1557  * the ReadWrite version can store a checksum for each chunk (or part
1558  of a chunk) to detect collisions on write
1559  * use shared pointers to support memory management of the subarrays?
1560  * find efficient ways to support slicing and transposition in the indexing
1561  functions of a view.
1562  1. possibility: each view contains
1563  * an index object 'bound_index_' with original dimension whose values denote
1564  coordinates of bound axes and offsets for unbound coordinates
1565  * a permutation object 'permutation_' with dimension of the view that maps
1566  view coordinates to original coordinates
1567  * that is:
1568  operator[](index)
1569  {
1570  shape_type full_index(bound_index_);
1571  for(int k=0; k<N_view; ++k)
1572  full_index[permutation_[k]] += index[k];
1573  split full_index into chunk part and local part
1574  look up chunk
1575  return pixel
1576  }
1577  * maybe this is faster if it is combined with the stride computation?
1578  * an optimization for unsliced arrays is desirable
1579  2. possibility:
1580  * add the point offset to the low-dimensional index
1581  * split low-dimensional index into chunk part and local part
1582  * look up chunk
1583  * determine scalar pointer offset from local part and strides plus a
1584  chunk-specific correction that can be stored in a 3^N array
1585  - but can we efficiently determine where to look in that offset array?
1586  3. possibility:
1587  * don't care about speed - require copySubarray() if indexing should
1588  be fast
1589  * provide a ChunkIterator that iterates over all chunks in a given ROI and returns a
1590  MultiArrayView for the present chunk (which remains locked in cache until the
1591  iterator is advanced).
1592  * implement proper copy constructors and assignment for all backends
1593  * test HDF5 constructor from existing dataset
1594  * put HDF5 into header of its own
1595  * is the full chunkForIterator() function slow? Check this with a simplified one
 1596  in a ChunkedArrayLazy where all chunks are already allocated, so that
 1597  we can simply skip the check
1598  * add support for Multiband and TinyVector pixels
1599 
1600  */
1601 
1602  public:
1603  typedef ChunkedArrayBase<N, T> base_type;
1604  typedef typename MultiArrayShape<N>::type shape_type;
1605  typedef typename shape_type::value_type difference_type_1;
1606  typedef T value_type;
1607  typedef value_type * pointer;
1608  typedef value_type const * const_pointer;
1609  typedef value_type & reference;
1610  typedef value_type const & const_reference;
1611  typedef ChunkIterator<N, T> chunk_iterator;
1612  typedef ChunkIterator<N, T const> chunk_const_iterator;
1613  typedef StridedScanOrderIterator<N, ChunkedMemory<T>, reference, pointer> iterator;
1614  typedef StridedScanOrderIterator<N, ChunkedMemory<T const>, const_reference, const_pointer> const_iterator;
1615  typedef SharedChunkHandle<N, T> Handle;
1616  typedef ChunkBase<N, T> Chunk;
1617  typedef MultiArrayView<N, T, ChunkedArrayTag> view_type;
1618  typedef MultiArrayView<N, T const, ChunkedArrayTag> const_view_type;
1619  typedef std::queue<Handle*> CacheType;
1620 
1621  static const long chunk_asleep = Handle::chunk_asleep;
1622  static const long chunk_uninitialized = Handle::chunk_uninitialized;
1623  static const long chunk_locked = Handle::chunk_locked;
1624  static const long chunk_failed = Handle::chunk_failed;
1625 
1626  // constructor only called by derived classes (ChunkedArray is abstract)
1627  explicit ChunkedArray(shape_type const & shape,
1628  shape_type const & chunk_shape = shape_type(),
1629  ChunkedArrayOptions const & options = ChunkedArrayOptions())
1630  : ChunkedArrayBase<N, T>(shape, chunk_shape)
1631  , bits_(initBitMask(this->chunk_shape_))
 1632  , mask_(this->chunk_shape_ - shape_type(1))
1633  , cache_max_size_(options.cache_max)
1634  , chunk_lock_(new threading::mutex())
1635  , fill_value_(T(options.fill_value))
1636  , fill_scalar_(options.fill_value)
1637  , handle_array_(detail::computeChunkArrayShape(shape, bits_, mask_))
1638  , data_bytes_()
1639  , overhead_bytes_(handle_array_.size()*sizeof(Handle))
1640  {
1641  fill_value_chunk_.pointer_ = &fill_value_;
1642  fill_value_handle_.pointer_ = &fill_value_chunk_;
1643  fill_value_handle_.chunk_state_.store(1);
1644  }
1645 
1646  // compute masks needed for fast index access
1647  static shape_type initBitMask(shape_type const & chunk_shape)
1648  {
1649  shape_type res;
1650  for(unsigned int k=0; k<N; ++k)
1651  {
1652  UInt32 bits = log2i(chunk_shape[k]);
1653  vigra_precondition(chunk_shape[k] == MultiArrayIndex(1 << bits),
1654  "ChunkedArray: chunk_shape elements must be powers of 2.");
1655  res[k] = bits;
1656  }
1657  return res;
1658  }
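 /* A worked example of the resulting fast index split: with
    chunk_shape = (64, 64, 64), initBitMask() returns bits_ = (6, 6, 6), and
    the constructor sets mask_ = (63, 63, 63). detail::ChunkIndexing then
    splits a global index into chunk part and local part by pure bit
    operations:

        chunk_index[k] = global_index[k] >> bits_[k];  // e.g. 100 >> 6 == 1
        local_index[k] = global_index[k] &  mask_[k];  // e.g. 100 & 63 == 36
 */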
1659 
1660  virtual ~ChunkedArray()
1661  {
1662  // std::cerr << " final cache size: " << cacheSize() << " (max: " << cacheMaxSize() << ")\n";
1663  }
1664 
1665  /** \brief Number of chunks currently held in the cache.
1666  */
1667  int cacheSize() const
1668  {
1669  return cache_.size();
1670  }
1671 
1672  /** \brief Bytes of main memory occupied by the array's data.
1673 
1674  Compressed chunks are only counted with their compressed size.
1675  Chunks swapped out to the hard drive are not counted.
1676  */
1677  std::size_t dataBytes() const
1678  {
1679  return data_bytes_;
1680  }
1681 
1682  /** \brief Bytes of main memory needed to manage the chunked storage.
1683  */
1684  std::size_t overheadBytes() const
1685  {
1686  return overhead_bytes_;
1687  }
1688 
1689  /** \brief Number of chunks along each coordinate direction.
1690  */
1691  virtual shape_type chunkArrayShape() const
1692  {
1693  return handle_array_.shape();
1694  }
1695 
1696  virtual std::size_t dataBytes(Chunk * c) const = 0;
1697 
1698  /** \brief Number of data bytes in an uncompressed chunk.
1699  */
1700  std::size_t dataBytesPerChunk() const
1701  {
1702  return prod(this->chunk_shape_)*sizeof(T);
1703  }
1704 
1705  /** \brief Bytes of main memory needed to manage a single chunk.
1706  */
1707  virtual std::size_t overheadBytesPerChunk() const = 0;
1708 
1709  /** \brief Find the chunk that contains array element 'global_start'.
1710  */
1711  shape_type chunkStart(shape_type const & global_start) const
1712  {
1713  shape_type chunk_start(SkipInitialization);
1714  detail::ChunkIndexing<N>::chunkIndex(global_start, bits_, chunk_start);
1715  return chunk_start;
1716  }
1717 
1718  /** \brief Find the chunk that is beyond array element 'global_stop'.
1719 
1720  Specifically, this computes
1721  \code
1722  chunkStart(global_stop - shape_type(1)) + shape_type(1)
1723  \endcode
1724  */
1725  shape_type chunkStop(shape_type global_stop) const
1726  {
1727  global_stop -= shape_type(1);
1728  shape_type chunk_stop(SkipInitialization);
1729  detail::ChunkIndexing<N>::chunkIndex(global_stop, bits_, chunk_stop);
1730  chunk_stop += shape_type(1);
1731  return chunk_stop;
1732  }
1733 
1734  /** \brief Find the shape of the chunk indexed by 'chunk_index'.
1735 
1736  This may differ from the global chunk shape because chunks at the
1737  right/lower border of the array may be smaller than usual.
1738  */
1739  shape_type chunkShape(shape_type const & chunk_index) const
1740  {
1741  return min(this->chunk_shape_,
1742  this->shape_ - chunk_index*this->chunk_shape_);
1743  }
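 /* A worked example for the three functions above: for an array of shape
    (100, 100, 100) with chunk_shape (64, 64, 64), the chunk grid has shape
    (2, 2, 2), and

        chunkStart(Shape3(70, 0, 0)) == Shape3(1, 0, 0)    // 70 >> 6 == 1
        chunkStop(Shape3(70, 1, 1))  == Shape3(2, 1, 1)    // one past the last chunk
        chunkShape(Shape3(1, 0, 0))  == Shape3(36, 64, 64) // border chunk: 100 - 64 == 36
 */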
1744 
1745  using base_type::chunkShape;
1746 
1747 #ifdef DOXYGEN
1748  /** \brief Return the global chunk shape.
1749 
1750  This is the shape of all chunks that are completely contained
1751  in the array's domain.
1752  */
1753  shape_type const & chunkShape() const;
1754 
1755  /** \brief Return the shape of this array.
1756  */
1757  shape_type const & shape() const;
1758 
1759  /** \brief Return the number of elements in this array.
1760  */
1761  MultiArrayIndex size() const;
1762 
1763  /** \brief Check if the given point is in the array domain.
1764  */
1765  bool isInside(shape_type const & p) const;
1766 
1767  /** \brief Return the name of the backend class that implements this ChunkedArray.
1768  */
1769  std::string backend() const;
1770 
1771 #endif
1772 
1773  inline void
1774  checkSubarrayBounds(shape_type const & start, shape_type const & stop,
1775  std::string message) const
1776  {
1777  message += ": subarray out of bounds.";
1778  vigra_precondition(allLessEqual(shape_type(), start) &&
1779  allLess(start, stop) &&
1780  allLessEqual(stop, this->shape_),
1781  message);
1782  }
1783 
1784  /** \brief Check if two arrays are elementwise equal.
1785  */
1786  template <class U, class C1>
1787  bool operator==(MultiArrayView<N, U, C1> const & rhs) const
1788  {
1789  if(this->shape() != rhs.shape())
1790  return false;
1791  const_iterator i = begin(), ie = end();
1792  typename MultiArrayView<N, U, C1>::const_iterator j = rhs.begin();
1793  for(; i != ie; ++i, ++j)
1794  if(*i != *j)
1795  return false;
1796  return true;
1797  }
1798 
1799  /** \brief Check if two arrays differ in at least one element.
1800  */
1801  template <class U, class C1>
1802  bool operator!=(MultiArrayView<N, U, C1> const & rhs) const
1803  {
1804  return !operator==(rhs);
1805  }
1806 
1807  // internal function to activate a chunk
1808  virtual pointer loadChunk(Chunk ** chunk, shape_type const & chunk_index) = 0;
1809 
1810  // internal function to send a chunk asleep or delete it
1811  // entirely (when destroy = true).
1812  // returns true if the chunk was deleted, false otherwise
1813  virtual bool unloadHandle(Handle * handle, bool destroy = false)
1814  {
1815  if(handle == &fill_value_handle_)
1816  return false;
1817  return unloadChunk(handle->pointer_, destroy);
1818  }
1819 
1820  virtual bool unloadChunk(Chunk * chunk, bool destroy = false) = 0;
1821 
1822  Handle * lookupHandle(shape_type const & index)
1823  {
1824  return &handle_array_[index];
1825  }
1826 
1827  // Decrease the reference counter of the given chunk.
1828  // Will inactivate the chunk when reference counter reaches zero.
1829  virtual void unrefChunk(IteratorChunkHandle<N, T> * h) const
1830  {
1831  unrefChunk(h->chunk_);
1832  h->chunk_ = 0;
1833  }
1834 
1835  // Likewise, when the chunk handle is given directly.
1836  void unrefChunk(Handle * chunk) const
1837  {
1838  if(chunk)
1839  {
1840  long rc = chunk->chunk_state_.fetch_sub(1);
1841  #ifdef VIGRA_CHECK_BOUNDS
1842  vigra_invariant(rc >= 0,
1843  "ChunkedArray::unrefChunk(): chunk refcount got negative!");
1844  #endif
1845  }
1846  }
1847 
1848  // Decrease the reference counter of several chunks simultaneously.
1849  void unrefChunks(ArrayVector<Handle*> const & chunks)
1850  {
1851  for(unsigned int k=0; k<chunks.size(); ++k)
1852  unrefChunk(chunks[k]);
1853 
1854  if(cacheMaxSize() > 0)
1855  {
1856  threading::lock_guard<threading::mutex> guard(*chunk_lock_);
1857  cleanCache(cache_.size());
1858  }
1859  }
1860 
1861  // Increase the reference counter of the given chunk.
1862  // If the chunk was asleep, the function first awakens it.
1863  long acquireRef(Handle * handle) const
1864  {
1865  // Obtain a reference to the current chunk handle.
1866  // We use a simple spin-lock here because it is very fast in case of success,
1867  // and failures (i.e. collisions with another thread) are presumably
1868  // very rare.
1869  //
1870  // The function returns the old value of chunk_state_.
1871  long rc = handle->chunk_state_.load(threading::memory_order_acquire);
1872  while(true)
1873  {
1874  if(rc >= 0)
1875  {
1876  if(handle->chunk_state_.compare_exchange_weak(rc, rc+1, threading::memory_order_seq_cst))
1877  {
1878  return rc;
1879  }
1880  }
1881  else
1882  {
1883  if(rc == chunk_failed)
1884  {
1885  vigra_precondition(false,
1886  "ChunkedArray::acquireRef() attempt to access failed chunk.");
1887  }
1888  else if(rc == chunk_locked)
1889  {
1890  // cache management in progress => try again later
1891  threading::this_thread::yield();
1892  rc = handle->chunk_state_.load(threading::memory_order_acquire);
1893  }
1894  else if(handle->chunk_state_.compare_exchange_weak(rc, chunk_locked, threading::memory_order_seq_cst))
1895  {
1896  return rc;
1897  }
1898  }
1899  }
1900  }
1901 
1902  pointer
1903  getChunk(Handle * handle, bool isConst, bool insertInCache, shape_type const & chunk_index) const
1904  {
1905  ChunkedArray * self = const_cast<ChunkedArray *>(this);
1906 
1907  long rc = acquireRef(handle);
1908  if(rc >= 0)
1909  return handle->pointer_->pointer_;
1910 
1911  threading::lock_guard<threading::mutex> guard(*chunk_lock_);
1912  try
1913  {
1914  T * p = self->loadChunk(&handle->pointer_, chunk_index);
1915  Chunk * chunk = handle->pointer_;
1916  if(!isConst && rc == chunk_uninitialized)
1917  std::fill(p, p + prod(chunkShape(chunk_index)), this->fill_value_);
1918 
1919  self->data_bytes_ += dataBytes(chunk);
1920 
1921  if(cacheMaxSize() > 0 && insertInCache)
1922  {
1923  // insert in queue of mapped chunks
1924  self->cache_.push(handle);
1925 
1926  // do cache management if cache is full
1927  // (note that we still hold the chunk_lock_)
1928  self->cleanCache(2);
1929  }
1930  handle->chunk_state_.store(1, threading::memory_order_release);
1931  return p;
1932  }
1933  catch(...)
1934  {
1935  handle->chunk_state_.store(chunk_failed);
1936  throw;
1937  }
1938  }
1939 
1940  // helper function for chunkForIterator()
1941  inline pointer
1942  chunkForIteratorImpl(shape_type const & point,
1943  shape_type & strides, shape_type & upper_bound,
1944  IteratorChunkHandle<N, T> * h,
1945  bool isConst) const
1946  {
1947  ChunkedArray * self = const_cast<ChunkedArray *>(this);
1948 
1949  unrefChunk(h->chunk_);
1950  h->chunk_ = 0;
1951 
1952  shape_type global_point = point + h->offset_;
1953 
1954  if(!this->isInside(global_point))
1955  {
1956  upper_bound = point + this->chunk_shape_;
1957  return 0;
1958  }
1959 
1960  shape_type chunkIndex(chunkStart(global_point));
1961 
1962  bool insertInCache = true;
1963  Handle * handle = self->lookupHandle(chunkIndex);
1964  if(isConst && handle->chunk_state_.load() == chunk_uninitialized)
1965  {
1966  handle = &self->fill_value_handle_;
1967  insertInCache = false;
1968  }
1969 
1970  pointer p = getChunk(handle, isConst, insertInCache, chunkIndex);
1971  strides = handle->strides();
1972  upper_bound = (chunkIndex + shape_type(1)) * this->chunk_shape_ - h->offset_;
1973  std::size_t offset = detail::ChunkIndexing<N>::offsetInChunk(global_point, mask_, strides);
1974  h->chunk_ = handle;
1975  return p + offset;
1976  }
1977 
1978  // called by chunked scan-order iterator to obtain the new data pointer
1979  // when the iterator enters a new chunk
1980  virtual pointer chunkForIterator(shape_type const & point,
1981  shape_type & strides, shape_type & upper_bound,
1982  IteratorChunkHandle<N, T> * h)
1983  {
1984  return chunkForIteratorImpl(point, strides, upper_bound, h, false);
1985  }
1986 
1987  virtual pointer chunkForIterator(shape_type const & point,
1988  shape_type & strides, shape_type & upper_bound,
1989  IteratorChunkHandle<N, T> * h) const
1990  {
1991  return chunkForIteratorImpl(point, strides, upper_bound, h, true);
1992  }
1993 
1994  // NOTE: This function must only be called while we hold the chunk_lock_.
1995  // This implies refcount != chunk_locked, so that race conditions are avoided.
1996  long releaseChunk(Handle * handle, bool destroy = false)
1997  {
1998  long rc = 0;
1999  bool mayUnload = handle->chunk_state_.compare_exchange_strong(rc, chunk_locked);
2000  if(!mayUnload && destroy)
2001  {
2002  rc = chunk_asleep;
2003  mayUnload = handle->chunk_state_.compare_exchange_strong(rc, chunk_locked);
2004  }
2005  if(mayUnload)
2006  {
2007  // refcount was zero or chunk_asleep => can unload
2008  try
2009  {
2010  vigra_invariant(handle != &fill_value_handle_,
2011  "ChunkedArray::releaseChunk(): attempt to release fill_value_handle_.");
2012  Chunk * chunk = handle->pointer_;
2013  this->data_bytes_ -= dataBytes(chunk);
2014  bool didDestroy = unloadChunk(chunk, destroy);
2015  this->data_bytes_ += dataBytes(chunk);
2016  if(didDestroy)
2017  handle->chunk_state_.store(chunk_uninitialized);
2018  else
2019  handle->chunk_state_.store(chunk_asleep);
2020  }
2021  catch(...)
2022  {
2023  handle->chunk_state_.store(chunk_failed);
2024  throw;
2025  }
2026  }
2027  return rc;
2028  }
2029 
2030  // NOTE: this function must only be called while we hold the chunk_lock_
2031  void cleanCache(int how_many = -1)
2032  {
2033  if(how_many == -1)
2034  how_many = cache_.size();
2035  for(; cache_.size() > cacheMaxSize() && how_many > 0; --how_many)
2036  {
2037  Handle * handle = cache_.front();
2038  cache_.pop();
2039  long rc = releaseChunk(handle);
2040  if(rc > 0) // refcount was positive => chunk is still needed
2041  cache_.push(handle);
2042  }
2043  }
2044 
2045  /** Sends all chunks asleep which are completely inside the given ROI.
2046  If destroy == true and the backend supports destruction (currently:
2047  ChunkedArrayLazy and ChunkedArrayCompressed), chunks will be deleted
2048  entirely. The chunk's contents after releaseChunks() are undefined.
2049  Currently, chunks retain their values when sent asleep, and assume the
2050  array's fill_value when deleted, but applications should not rely on this
2051  behavior.
2052  */
2053  void releaseChunks(shape_type const & start, shape_type const & stop, bool destroy = false)
2054  {
2055  checkSubarrayBounds(start, stop, "ChunkedArray::releaseChunks()");
2056 
2057  MultiCoordinateIterator<N> i(chunkStart(start), chunkStop(stop)),
2058  end(i.getEndIterator());
2059  for(; i != end; ++i)
2060  {
2061  shape_type chunkOffset = *i * this->chunk_shape_;
2062  if(!allLessEqual(start, chunkOffset) ||
2063  !allLessEqual(min(chunkOffset+this->chunk_shape_, this->shape()), stop))
2064  {
2065  // chunk is only partially covered by the ROI
2066  continue;
2067  }
2068 
2069  Handle * handle = this->lookupHandle(*i);
2070  threading::lock_guard<threading::mutex> guard(*chunk_lock_);
2071  releaseChunk(handle, destroy);
2072  }
2073 
2074  // remove all chunks from the cache that are asleep or uninitialized
2075  threading::lock_guard<threading::mutex> guard(*chunk_lock_);
2076  int cache_size = cache_.size();
2077  for(int k=0; k < cache_size; ++k)
2078  {
2079  Handle * handle = cache_.front();
2080  cache_.pop();
2081  if(handle->chunk_state_.load() >= 0)
2082  cache_.push(handle);
2083  }
2084  }
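 /* Usage sketch (the array name is illustrative): discard a block of a
    ChunkedArrayCompressed<3, int> named 'labels' that is no longer needed:

        labels.releaseChunks(Shape3(0, 0, 0), Shape3(64, 64, 64), true);

    With destroy = true, the block's memory is freed; reading the block
    currently yields the fill value again, but as noted above, applications
    should not rely on this. */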
2085 
2086  /** \brief Copy an ROI of the chunked array into an ordinary MultiArrayView.
2087 
2088  The ROI's lower bound is given by 'start', its upper bound (in 'beyond' sense)
2089  is 'start + subarray.shape()'. Chunks in the ROI are only activated while
2090  the read is in progress.
2091  */
2092  template <class U, class Stride>
2093  void
2094  checkoutSubarray(shape_type const & start,
2095  MultiArrayView<N, U, Stride> & subarray) const
2096  {
2097  shape_type stop = start + subarray.shape();
2098 
2099  checkSubarrayBounds(start, stop, "ChunkedArray::checkoutSubarray()");
2100 
2101  chunk_const_iterator i = chunk_cbegin(start, stop);
2102  for(; i.isValid(); ++i)
2103  {
2104  subarray.subarray(i.chunkStart()-start, i.chunkStop()-start) = *i;
2105  }
2106  }
2107 
2108  /** \brief Copy an ordinary MultiArrayView into an ROI of the chunked array.
2109 
2110  The ROI's lower bound is given by 'start', its upper bound (in 'beyond' sense)
2111  is 'start + subarray.shape()'. Chunks in the ROI are only activated while
2112  the write is in progress.
2113  */
2114  template <class U, class Stride>
2115  void
2116  commitSubarray(shape_type const & start,
2117  MultiArrayView<N, U, Stride> const & subarray)
2118  {
2119  shape_type stop = start + subarray.shape();
2120 
2121  vigra_precondition(!this->isReadOnly(),
2122  "ChunkedArray::commitSubarray(): array is read-only.");
2123  checkSubarrayBounds(start, stop, "ChunkedArray::commitSubarray()");
2124 
2125  chunk_iterator i = chunk_begin(start, stop);
2126  for(; i.isValid(); ++i)
2127  {
2128  *i = subarray.subarray(i.chunkStart()-start, i.chunkStop()-start);
2129  }
2130  }
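 /* A typical read-modify-write round trip using the two functions above
    (a sketch; 'a' is assumed to be a writable ChunkedArray<3, float> with
    shape at least (50, 50, 50)):

        MultiArray<3, float> buf(Shape3(40, 40, 40));
        a.checkoutSubarray(Shape3(10, 10, 10), buf);  // copy ROI into plain memory
        typedef MultiArray<3, float>::iterator Iter;
        for(Iter p = buf.begin(); p != buf.end(); ++p)
            *p *= 2.0f;                               // work at plain-array speed
        a.commitSubarray(Shape3(10, 10, 10), buf);    // write the ROI back
 */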
2131 
2132  // helper function for subarray()
2133  template <class View>
2134  void subarrayImpl(shape_type const & start, shape_type const & stop,
2135  View & view,
2136  bool isConst) const
2137  {
2138  vigra_precondition(isConst || !this->isReadOnly(),
2139  "ChunkedArray::subarray(): array is read-only.");
2140  checkSubarrayBounds(start, stop, "ChunkedArray::subarray()");
2141  shape_type chunk_start(chunkStart(start)), chunk_stop(chunkStop(stop));
2142 
2143  view.shape_ = stop-start;
2144  view.chunk_shape_ = this->chunk_shape_;
2145  view.chunks_.reshape(chunk_stop-chunk_start);
2146  view.offset_ = start - chunk_start * this->chunk_shape_;
2147  view.bits_ = bits_;
2148  view.mask_ = mask_;
2149 
2150  typedef typename View::UnrefProxy Unref;
2151  ChunkedArray* self = const_cast<ChunkedArray*>(this);
2152  Unref * unref = new Unref(view.chunks_.size(), self);
2153  view.unref_ = VIGRA_SHARED_PTR<Unref>(unref);
2154 
2155  MultiCoordinateIterator<N> i(chunk_start, chunk_stop),
2156  end(i.getEndIterator());
2157  for(; i != end; ++i)
2158  {
2159  Handle * handle = self->lookupHandle(*i);
2160 
2161  if(isConst && handle->chunk_state_.load() == chunk_uninitialized)
2162  handle = &self->fill_value_handle_;
2163 
2164  // This potentially acquires the chunk_lock_ in each iteration.
2165  // Would it be better to acquire it once before the loop?
2166  pointer p = getChunk(handle, isConst, true, *i);
2167 
2168  ChunkBase<N, T> * mini_chunk = &view.chunks_[*i - chunk_start];
2169  mini_chunk->pointer_ = p;
2170  mini_chunk->strides_ = handle->strides();
2171  unref->chunks_[i.scanOrderIndex()] = handle;
2172  }
2173  }
2174 
2175  /** \brief Create a view to the specified ROI.
2176 
2177  The view can be used like an ordinary \ref MultiArrayView, but is
2178  a bit slower. All chunks intersecting the view remain active
2179  throughout the view's lifetime.
2180  */
2181  view_type
2182  subarray(shape_type const & start, shape_type const & stop)
2183  {
2184  view_type view;
2185  subarrayImpl(start, stop, view, false);
2186  return view;
2187  }
2188 
2189  /** \brief Create a read-only view to the specified ROI.
2190 
2191  The view can be used like an ordinary \ref MultiArrayView, but is
2192  a bit slower. All chunks intersecting the view remain active
2193  throughout the view's lifetime.
2194  */
2195  const_view_type
2196  subarray(shape_type const & start, shape_type const & stop) const
2197  {
2198  const_view_type view;
2199  subarrayImpl(start, stop, view, true);
2200  return view;
2201  }
2202 
2203  /** \brief Create a read-only view to the specified ROI.
2204 
2205  The view can be used like an ordinary \ref MultiArrayView, but is
2206  a bit slower. All chunks intersecting the view remain active
2207  throughout the view's lifetime.
2208  */
2209  const_view_type
2210  const_subarray(shape_type const & start, shape_type const & stop) const
2211  {
2212  const_view_type view;
2213  subarrayImpl(start, stop, view, true);
2214  return view;
2215  }
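 /* Usage sketch for subarray() ('a' as above): the returned view pins all
    intersecting chunks in memory and then behaves like a MultiArrayView:

        ChunkedArray<3, float>::view_type roi =
            a.subarray(Shape3(0, 0, 0), Shape3(64, 64, 64));
        roi(1, 2, 3) = 42.0f;
 */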
2216 
2217  /** \brief Read the array element at index 'point'.
2218 
2219  Since the corresponding chunk must potentially be activated
2220  first, this function may be slow and should mainly be used in
2221  debugging.
2222  */
2223  value_type getItem(shape_type const & point) const
2224  {
2225  vigra_precondition(this->isInside(point),
2226  "ChunkedArray::getItem(): index out of bounds.");
2227 
2228  ChunkedArray * self = const_cast<ChunkedArray*>(this);
2229  shape_type chunk_index(chunkStart(point));
2230  Handle * handle = self->lookupHandle(chunk_index);
2231  if(handle->chunk_state_.load() == chunk_uninitialized)
2232  return fill_value_;
2233  pointer p = self->getChunk(handle, true, false, chunk_index);
2234  value_type res = *(p +
2235  detail::ChunkIndexing<N>::offsetInChunk(point, mask_, handle->strides()));
2236  self->unrefChunk(handle);
2237  return res;
2238  }
2239 
2240  /** \brief Write the array element at index 'point'.
2241 
2242  Since the corresponding chunk must potentially be activated
2243  first, this function may be slow and should mainly be used in
2244  debugging.
2245  */
2246  void setItem(shape_type const & point, value_type const & v)
2247  {
2248  vigra_precondition(!this->isReadOnly(),
2249  "ChunkedArray::setItem(): array is read-only.");
2250  vigra_precondition(this->isInside(point),
2251  "ChunkedArray::setItem(): index out of bounds.");
2252 
2253  shape_type chunk_index(chunkStart(point));
2254  Handle * handle = lookupHandle(chunk_index);
2255  pointer p = getChunk(handle, false, false, chunk_index);
2256  *(p + detail::ChunkIndexing<N>::offsetInChunk(point, mask_, handle->strides())) = v;
2257  unrefChunk(handle);
2258  }
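 /* Element access sketch ('a' as above); convenient for debugging, but too
    slow for inner loops because every call may first activate a chunk:

        a.setItem(Shape3(1, 2, 3), 42.0f);
        float v = a.getItem(Shape3(1, 2, 3));  // v == 42.0f
 */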
2259 
2260  /** \brief Create a lower dimensional view to the chunked array.
2261 
2262  Dimension 'dim' is bound at 'index', all other dimensions remain
2263  unchanged. All chunks intersecting the view remain active
2264  throughout the view's lifetime.
2265  */
2266  MultiArrayView<N-1, T, ChunkedArrayTag>
2267  bindAt(MultiArrayIndex dim, MultiArrayIndex index) const
2268  {
2269  shape_type start, stop(this->shape());
2270  start[dim] = index;
2271  stop[dim] = index+1;
2272  return subarray(start, stop).bindAt(dim, 0);
2273  }
2274 
2275  /** \brief Create a lower dimensional view to the chunked array.
2276 
2277  Dimension 'M' (given as a template parameter) is bound at 'index',
2278  all other dimensions remain unchanged. All chunks intersecting the
2279  view remain active throughout the view's lifetime.
2280  */
2281  template <unsigned int M>
2282  MultiArrayView<N-1, T, ChunkedArrayTag>
2283  bind (difference_type_1 index) const
2284  {
2285  return bindAt(M, index);
2286  }
2287 
2288  /** \brief Create a lower dimensional view to the chunked array.
2289 
2290  Dimension 'N-1' is bound at 'index', all other dimensions remain
2291  unchanged. All chunks intersecting the view remain active
2292  throughout the view's lifetime.
2293  */
2294  MultiArrayView<N-1, T, ChunkedArrayTag>
2295  bindOuter (difference_type_1 index) const
2296  {
2297  return bindAt(N-1, index);
2298  }
2299 
2300  /** \brief Create a lower dimensional view to the chunked array.
2301 
2302  The M rightmost dimensions are bound to the indices given in 'd'.
2303  All chunks intersecting the view remain active throughout the view's lifetime.
2304  */
2305  template <int M, class Index>
2306  MultiArrayView<N-M, T, ChunkedArrayTag>
2307  bindOuter(const TinyVector<Index, M> & d) const
2308  {
2309  return bindAt(N-1, d[M-1]).bindOuter(d.dropIndex(M-1));
2310  }
2311 
2312  // terminate the recursion of the above function
2313  template <class Index>
2314  MultiArrayView<N-1, T, ChunkedArrayTag>
2315  bindOuter(const TinyVector <Index, 1> & d) const
2316  {
2317  return bindAt(N-1, d[0]);
2318  }
2319 
2320  /** \brief Create a lower dimensional view to the chunked array.
2321 
2322  Dimension '0' is bound at 'index', all other dimensions remain
2323  unchanged. All chunks intersecting the view remain active
2324  throughout the view's lifetime.
2325  */
2326  MultiArrayView<N-1, T, ChunkedArrayTag>
2327  bindInner (difference_type_1 index) const
2328  {
2329  return bindAt(0, index);
2330  }
2331 
2332  /** \brief Create a lower dimensional view to the chunked array.
2333 
2334  The M leftmost dimensions are bound to the indices given in 'd'.
2335  All chunks intersecting the view remain active throughout the view's lifetime.
2336  */
2337  template <int M, class Index>
2338  MultiArrayView<N-M, T, ChunkedArrayTag>
2339  bindInner(const TinyVector<Index, M> & d) const
2340  {
2341  return bindAt(0, d[0]).bindInner(d.dropIndex(0));
2342  }
2343 
2344  // terminate the recursion of the above function
2345  template <class Index>
2346  MultiArrayView<N-1, T, ChunkedArrayTag>
2347  bindInner(const TinyVector <Index, 1> & d) const
2348  {
2349  return bindAt(0, d[0]);
2350  }
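 /* Binding sketch ('a' as above): extract the 2D slice at index 5 of
    dimension 2; the result is a lower-dimensional chunked view:

        MultiArrayView<2, float, ChunkedArrayTag> slice = a.bindAt(2, 5);
        // equivalently: a.bind<2>(5); for the last dimension: a.bindOuter(5)
 */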
2351 
2352  /** \brief Get the number of chunks the cache will hold.
2353 
2354  If there are any inactive chunks in the cache, these will be
2355  sent asleep until the max cache size is reached. The max cache
2356  size may be temporarily overridden when more chunks need
2357  to be active simultaneously.
2358  */
2359  std::size_t cacheMaxSize() const
2360  {
2361  if(cache_max_size_ < 0)
2362  const_cast<int &>(cache_max_size_) = detail::defaultCacheSize(this->chunkArrayShape());
2363  return cache_max_size_;
2364  }
2365 
2366  /** \brief Set the number of chunks the cache will hold.
2367 
2368  This should be big enough to hold all chunks that are frequently needed
2369  and must therefore be adapted to the application's access pattern.
2370  */
2371  void setCacheMaxSize(std::size_t c)
2372  {
2373  cache_max_size_ = c;
2374  if(c < cache_.size())
2375  {
2376  threading::lock_guard<threading::mutex> guard(*chunk_lock_);
2377  cleanCache();
2378  }
2379  }
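 /* Cache sizing sketch: an algorithm sweeping 'a' slice by slice should keep
    at least one full slice of chunks cached, e.g. for shape (1000, 1000, 1000)
    and chunk shape (64, 64, 64) (a 16x16x16 chunk grid):

        a.setCacheMaxSize(16 * 16);  // one z-slice of chunks
 */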
2380 
2381  /** \brief Create a scan-order iterator for the entire chunked array.
2382  */
2383  iterator begin()
2384  {
2385  return createCoupledIterator(*this);
2386  }
2387 
2388  /** \brief Create the end iterator for scan-order iteration over
2389  the entire chunked array.
2390  */
2391  iterator end()
2392  {
2393  return begin().getEndIterator();
2394  }
2395 
2396  /** \brief Create a read-only scan-order iterator for the entire
2397  chunked array.
2398  */
2399  const_iterator cbegin() const
2400  {
2401  return createCoupledIterator(const_cast<ChunkedArray const &>(*this));
2402  }
2403 
2404  /** \brief Create the end iterator for read-only scan-order iteration over
2405  the entire chunked array.
2406  */
2407  const_iterator cend() const
2408  {
2409  return cbegin().getEndIterator();
2410  }
2411 
2412  /** \brief Create a read-only scan-order iterator for the entire
2413  chunked array.
2414  */
2415  const_iterator begin() const
2416  {
2417  return createCoupledIterator(*this);
2418  }
2419 
2420  /** \brief Create the end iterator for read-only scan-order iteration over
2421  the entire chunked array.
2422  */
2423  const_iterator end() const
2424  {
2425  return begin().getEndIterator();
2426  }
2427 
2428  /** \brief Create an iterator over all chunks intersected by the given ROI.
2429  */
2430  chunk_iterator chunk_begin(shape_type const & start, shape_type const & stop)
2431  {
2432  checkSubarrayBounds(start, stop, "ChunkedArray::chunk_begin()");
2433  return chunk_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
2434  }
2435 
2436  /** \brief Create the end iterator for iteration over all chunks
2437  intersected by the given ROI.
2438  */
2439  chunk_iterator chunk_end(shape_type const & start, shape_type const & stop)
2440  {
2441  return chunk_begin(start, stop).getEndIterator();
2442  }
2443 
2444  /** \brief Create a read-only iterator over all chunks intersected
2445  by the given ROI.
2446  */
2447  chunk_const_iterator chunk_begin(shape_type const & start, shape_type const & stop) const
2448  {
2449  checkSubarrayBounds(start, stop, "ChunkedArray::chunk_begin()");
2450  return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
2451  }
2452 
2453  /** \brief Create the end iterator for read-only iteration over all chunks
2454  intersected by the given ROI.
2455  */
2456  chunk_const_iterator chunk_end(shape_type const & start, shape_type const & stop) const
2457  {
2458  return chunk_begin(start, stop).getEndIterator();
2459  }
2460 
2461  /** \brief Create a read-only iterator over all chunks intersected
2462  by the given ROI.
2463  */
2464  chunk_const_iterator chunk_cbegin(shape_type const & start, shape_type const & stop) const
2465  {
2466  checkSubarrayBounds(start, stop, "ChunkedArray::chunk_cbegin()");
2467  return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
2468  }
2469 
2470  /** \brief Create the end iterator for read-only iteration over all chunks
2471  intersected by the given ROI.
2472  */
2473  chunk_const_iterator chunk_cend(shape_type const & start, shape_type const & stop) const
2474  {
2475  return chunk_cbegin(start, stop).getEndIterator();
2476  }
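 /* Chunk-wise processing sketch ('a' as above): this is the recommended fast
    path for touching a whole ROI, mirroring the loops in checkoutSubarray()
    and commitSubarray() above:

        ChunkedArray<3, float>::chunk_iterator c =
            a.chunk_begin(Shape3(0, 0, 0), a.shape());
        for(; c.isValid(); ++c)
            *c *= 2.0f;  // *c is a MultiArrayView over this chunk's ROI part
 */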
2477 
2478  shape_type bits_, mask_;
2479  int cache_max_size_;
2480  VIGRA_SHARED_PTR<threading::mutex> chunk_lock_;
2481  CacheType cache_;
2482  Chunk fill_value_chunk_;
2483  Handle fill_value_handle_;
2484  value_type fill_value_;
2485  double fill_scalar_;
2486  MultiArray<N, Handle> handle_array_;
2487  std::size_t data_bytes_, overhead_bytes_;
2488 };
2489 
2490 /** Returns a CoupledScanOrderIterator to simultaneously iterate over image m1 and its coordinates.
2491  */
2492 template <unsigned int N, class T>
2493 typename ChunkedArray<N, T>::iterator
2494 createCoupledIterator(ChunkedArray<N, T> & m)
2495 {
2496  typedef typename ChunkedArray<N, T>::iterator IteratorType;
2497  typedef typename IteratorType::handle_type P1;
2498  typedef typename P1::base_type P0;
2499 
2500  return IteratorType(P1(m,
2501  P0(m.shape())));
2502 }
2503 
2504 template <unsigned int N, class T>
2505 typename ChunkedArray<N, T>::const_iterator
2506 createCoupledIterator(ChunkedArray<N, T> const & m)
2507 {
2508  typedef typename ChunkedArray<N, T>::const_iterator IteratorType;
2509  typedef typename IteratorType::handle_type P1;
2510  typedef typename P1::base_type P0;
2511 
2512  return IteratorType(P1(m,
2513  P0(m.shape())));
2514 }
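/* Scan-order iteration sketch ('a' as above): begin()/end() yield chunked
   scan-order iterators that dereference to the element value, exactly as
   used by ChunkedArray::operator==() above:

       ChunkedArray<3, float>::iterator i = a.begin(), iend = a.end();
       for(; i != iend; ++i)
           *i = 0.0f;
*/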
2515 
2516 /** Implement ChunkedArray as an ordinary MultiArray with a single chunk.
2517 
2518  <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
2519  Namespace: vigra
2520 */
2521 template <unsigned int N, class T, class Alloc = std::allocator<T> >
2522 class ChunkedArrayFull
2523 : public ChunkedArray<N, T>,
2524  public MultiArray<N, T, Alloc>
2525 {
2526  public:
2527 
2528  typedef MultiArray<N, T, Alloc> Storage;
2529  typedef typename Storage::value_type value_type;
2530  typedef typename Storage::pointer pointer;
2531  typedef typename Storage::const_pointer const_pointer;
2532  typedef typename Storage::reference reference;
2533  typedef typename Storage::const_reference const_reference;
2534  typedef typename Storage::difference_type difference_type;
2535  typedef typename Storage::difference_type shape_type;
2536  typedef typename Storage::key_type key_type;
2537  typedef typename Storage::size_type size_type;
2538  typedef typename Storage::difference_type_1 difference_type_1;
2539  typedef typename Storage::iterator iterator;
2540  typedef typename Storage::const_iterator const_iterator;
2541  typedef typename Storage::view_type view_type;
2542 
2543  typedef typename ChunkedArray<N, T>::Chunk Chunk;
2544 
2545  static shape_type computeChunkShape(shape_type s)
2546  {
2547  for(unsigned int k=0; k<N; ++k)
2548  s[k] = ceilPower2(s[k]);
2549  return s;
2550  }
2551 
2552  using Storage::subarray;
2553  using Storage::bindOuter;
2554  using Storage::bindInner;
2555  using Storage::bind;
2556  using Storage::bindAt;
2557  using Storage::isInside;
2558  using Storage::shape;
2559  using Storage::size;
2560  using Storage::begin;
2561  using Storage::end;
2562 
2563 #ifndef DOXYGEN // doxygen doesn't understand this
2564  using Storage::operator==;
2565  using Storage::operator!=;
2566 #endif
2567 
2568  /** \brief Construct with given 'shape' and 'options', using the allocator
2569  'alloc' to manage the memory.
2570  */
2571  explicit ChunkedArrayFull(shape_type const & shape,
2572  ChunkedArrayOptions const & options = ChunkedArrayOptions(),
2573  Alloc const & alloc = Alloc())
2574  : ChunkedArray<N, T>(shape, computeChunkShape(shape), options.cacheMax(0)),
2575  Storage(shape, this->fill_value_, alloc),
2576  upper_bound_(shape),
2577  chunk_(detail::defaultStride(shape), this->data())
2578  {
2579  this->handle_array_[0].pointer_ = &chunk_;
2580  this->handle_array_[0].chunk_state_.store(1);
2581  this->data_bytes_ = size()*sizeof(T);
2582  this->overhead_bytes_ = overheadBytesPerChunk();
2583  }
2584 
2585  ChunkedArrayFull(ChunkedArrayFull const & rhs)
2586  : ChunkedArray<N, T>(rhs),
2587  Storage(rhs),
2588  upper_bound_(rhs.upper_bound_),
2589  chunk_(detail::defaultStride(this->shape()), this->data())
2590  {
2591  this->handle_array_[0].pointer_ = &chunk_;
2592  this->handle_array_[0].chunk_state_.store(1);
2593  }
2594 
2595  ChunkedArrayFull & operator=(ChunkedArrayFull const & rhs)
2596  {
2597  if(this != &rhs)
2598  {
2599  ChunkedArray<N, T>::operator=(rhs);
2600  Storage::operator=(rhs);
2601  upper_bound_ = rhs.upper_bound_;
2602  }
2603  return *this;
2604  }
2605 
2606  ~ChunkedArrayFull()
2607  {}
2608 
2609  virtual shape_type chunkArrayShape() const
2610  {
2611  return shape_type(1);
2612  }
2613 
2614  virtual pointer loadChunk(ChunkBase<N, T> **, shape_type const &)
2615  {
2616  return this->data();
2617  }
2618 
2619  virtual bool unloadChunk(ChunkBase<N, T> *, bool /* destroy */)
2620  {
2621  return false; // never destroys the data
2622  }
2623 
2624  virtual std::size_t dataBytes(Chunk * c) const
2625  {
2626  return prod(this->shape())*sizeof(T);
2627  }
2628 
2629  virtual std::size_t overheadBytesPerChunk() const
2630  {
2631  return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
2632  }
2633 
2634  virtual pointer chunkForIterator(shape_type const & point,
2635  shape_type & strides, shape_type & upper_bound,
2636  IteratorChunkHandle<N, T> * h) const
2637  {
2638  shape_type global_point = point + h->offset_;
2639 
2640  if(!this->isInside(global_point))
2641  {
2642  upper_bound = point + this->chunk_shape_;
2643  return 0;
2644  }
2645 
2646  strides = this->stride();
2647  upper_bound = upper_bound_;
2648  return const_cast<pointer>(&Storage::operator[](global_point));
2649  }
2650 
2651  virtual pointer chunkForIterator(shape_type const & point,
2652  shape_type & strides, shape_type & upper_bound,
2653  IteratorChunkHandle<N, T> * h)
2654  {
2655  shape_type global_point = point + h->offset_;
2656 
2657  if(!this->isInside(global_point))
2658  {
2659  upper_bound = point + this->chunk_shape_;
2660  return 0;
2661  }
2662 
2663  strides = this->stride();
2664  upper_bound = upper_bound_;
2665  return &Storage::operator[](global_point);
2666  }
2667 
2668  virtual std::string backend() const
2669  {
2670  return "ChunkedArrayFull";
2671  }
2672 
2673  shape_type upper_bound_;
2674  Chunk chunk_; // a dummy chunk to fulfill the API
2675 };
2676 
2677 /** Implement ChunkedArray as a collection of in-memory chunks.
2678 
2679  This optimizes over an ordinary MultiArray by allocating chunks only
2680  upon the first write. This is especially useful when only a small
2681  part of the entire array is actually needed, e.g. in a data viewer.
2682 
2683  <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
2684  Namespace: vigra
2685 */
2686 template <unsigned int N, class T, class Alloc = std::allocator<T> >
2687 class ChunkedArrayLazy
2688 : public ChunkedArray<N, T>
2689 {
2690  public:
2691 
2692  class Chunk
2693  : public ChunkBase<N, T>
2694  {
2695  public:
2696  typedef typename MultiArrayShape<N>::type shape_type;
2697  typedef T value_type;
2698  typedef value_type * pointer;
2699  typedef value_type & reference;
2700 
2701  Chunk(shape_type const & shape, Alloc const & alloc = Alloc())
2702  : ChunkBase<N, T>(detail::defaultStride(shape))
2703  , size_(prod(shape))
2704  , alloc_(alloc)
2705  {}
2706 
2707  ~Chunk()
2708  {
2709  deallocate();
2710  }
2711 
2712  pointer allocate()
2713  {
2714  if(this->pointer_ == 0)
2715  this->pointer_ = detail::alloc_initialize_n<T>(size_, T(), alloc_);
2716  return this->pointer_;
2717  }
2718 
2719  void deallocate()
2720  {
2721  detail::destroy_dealloc_n(this->pointer_, size_, alloc_);
2722  this->pointer_ = 0;
2723  }
2724 
2725  MultiArrayIndex size_;
2726  Alloc alloc_;
2727 
2728  private:
2729  Chunk & operator=(Chunk const &);
2730  };
2731 
2732  typedef MultiArray<N, SharedChunkHandle<N, T> > ChunkStorage;
2733  typedef typename ChunkStorage::difference_type shape_type;
2734  typedef T value_type;
2735  typedef value_type * pointer;
2736  typedef value_type & reference;
2737 
2738  /** \brief Construct with given 'shape', 'chunk_shape' and 'options',
2739  using the allocator 'alloc' to manage the memory.
2740  */
2741  explicit ChunkedArrayLazy(shape_type const & shape,
2742  shape_type const & chunk_shape=shape_type(),
2743  ChunkedArrayOptions const & options = ChunkedArrayOptions(),
2744  Alloc const & alloc = Alloc())
2745  : ChunkedArray<N, T>(shape, chunk_shape, options.cacheMax(0))
2746  , alloc_(alloc)
2747  {}
2748 
2749  ~ChunkedArrayLazy()
2750  {
2751  typename ChunkStorage::iterator i = this->handle_array_.begin(),
2752  end = this->handle_array_.end();
2753  for(; i != end; ++i)
2754  {
2755  if(i->pointer_)
2756  delete static_cast<Chunk*>(i->pointer_);
2757  i->pointer_ = 0;
2758  }
2759  }
2760 
2761  virtual pointer loadChunk(ChunkBase<N, T> ** p, shape_type const & index)
2762  {
2763  if(*p == 0)
2764  {
2765  *p = new Chunk(this->chunkShape(index));
2766  this->overhead_bytes_ += sizeof(Chunk);
2767  }
2768  return static_cast<Chunk *>(*p)->allocate();
2769  }
2770 
2771  virtual bool unloadChunk(ChunkBase<N, T> * chunk, bool destroy)
2772  {
2773  if(destroy)
2774  static_cast<Chunk *>(chunk)->deallocate();
2775  return destroy;
2776  }
2777 
2778  virtual std::string backend() const
2779  {
2780  return "ChunkedArrayLazy";
2781  }
2782 
2783  virtual std::size_t dataBytes(ChunkBase<N,T> * c) const
2784  {
2785  return c->pointer_ == 0
2786  ? 0
2787  : static_cast<Chunk*>(c)->size_*sizeof(T);
2788  }
2789 
2790  virtual std::size_t overheadBytesPerChunk() const
2791  {
2792  return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
2793  }
2794 
2795  Alloc alloc_;
2796 };
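/* Construction sketch for the lazy back-end: chunk memory is allocated only
   on first write, so a huge array costs almost nothing up front:

       ChunkedArrayLazy<3, float> a(Shape3(1000, 1000, 1000), Shape3(64, 64, 64));
       a.setItem(Shape3(500, 500, 500), 1.0f);  // allocates exactly one chunk
*/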
2797 
2798 /** Implement ChunkedArray as a collection of potentially compressed
2799  in-memory chunks.
2800 
2801  This works like \ref ChunkedArrayLazy, but inactive chunks are compressed
2802  when sent asleep. This is especially appropriate for highly compressible
2803  data such as label images.
2804 
2805  <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
2806  Namespace: vigra
2807 */
2808 template <unsigned int N, class T, class Alloc = std::allocator<T> >
2809 class ChunkedArrayCompressed
2810 : public ChunkedArray<N, T>
2811 {
2812  public:
2813 
2814  class Chunk
2815  : public ChunkBase<N, T>
2816  {
2817  public:
2818  typedef typename MultiArrayShape<N>::type shape_type;
2819  typedef T value_type;
2820  typedef value_type * pointer;
2821  typedef value_type & reference;
2822 
2823  Chunk(shape_type const & shape)
2824  : ChunkBase<N, T>(detail::defaultStride(shape))
2825  , compressed_()
2826  , size_(prod(shape))
2827  {}
2828 
2829  ~Chunk()
2830  {
2831  deallocate();
2832  }
2833 
2834  pointer allocate()
2835  {
2836  if(this->pointer_ == 0)
2837  this->pointer_ = detail::alloc_initialize_n<T>(size_, T(), alloc_);
2838  return this->pointer_;
2839  }
2840 
2841  void deallocate()
2842  {
2843  detail::destroy_dealloc_n(this->pointer_, size_, alloc_);
2844  this->pointer_ = 0;
2845  compressed_.clear();
2846  }
2847 
2848  void compress(CompressionMethod method)
2849  {
2850  if(this->pointer_ != 0)
2851  {
2852  vigra_invariant(compressed_.size() == 0,
2853  "ChunkedArrayCompressed::Chunk::compress(): compressed and uncompressed pointer are both non-zero.");
2854 
2855  ::vigra::compress((char const *)this->pointer_, size_*sizeof(T), compressed_, method);
2856 
2857  // std::cerr << "compression ratio: " << double(compressed_.size())/(this->size()*sizeof(T)) << "\n";
2858  detail::destroy_dealloc_n(this->pointer_, size_, alloc_);
2859  this->pointer_ = 0;
2860  }
2861  }
2862 
2863  pointer uncompress(CompressionMethod method)
2864  {
2865  if(this->pointer_ == 0)
2866  {
2867  if(compressed_.size())
2868  {
2869  this->pointer_ = alloc_.allocate((typename Alloc::size_type)size_);
2870 
2871  ::vigra::uncompress(compressed_.data(), compressed_.size(),
2872  (char*)this->pointer_, size_*sizeof(T), method);
2873  compressed_.clear();
2874  }
2875  else
2876  {
2877  this->pointer_ = allocate();
2878  }
2879  }
2880  else
2881  {
2882  vigra_invariant(compressed_.size() == 0,
2883  "ChunkedArrayCompressed::Chunk::uncompress(): compressed and uncompressed pointer are both non-zero.");
2884  }
2885  return this->pointer_;
2886  }
2887 
2888  ArrayVector<char> compressed_;
2889  MultiArrayIndex size_;
2890  Alloc alloc_;
2891 
2892  private:
2893  Chunk & operator=(Chunk const &);
2894  };
2895 
2896  typedef MultiArray<N, SharedChunkHandle<N, T> > ChunkStorage;
2897  typedef typename ChunkStorage::difference_type shape_type;
2898  typedef T value_type;
2899  typedef value_type * pointer;
2900  typedef value_type & reference;
2901 
2902  /** \brief Construct with given 'shape', 'chunk_shape' and 'options'.
2903 
2904  The most important option concerns the compression algorithm. Supported
2905  algorithms are:
2906  <ul>
2907  <li>LZ4: Very fast algorithm that achieves decent compression ratios.
2908  <li>ZLIB_FAST: Fast compression using 'zlib' (slower than LZ4, but higher compression).
2909  <li>ZLIB_BEST: Best compression using 'zlib', slow.
2910  <li>ZLIB_NONE: Use 'zlib' format without compression.
2911  <li>DEFAULT_COMPRESSION: Same as LZ4.
2912  </ul>
2913  */
2914  explicit ChunkedArrayCompressed(shape_type const & shape,
2915  shape_type const & chunk_shape=shape_type(),
2916  ChunkedArrayOptions const & options = ChunkedArrayOptions())
2917  : ChunkedArray<N, T>(shape, chunk_shape, options),
2918  compression_method_(options.compression_method)
2919  {
2920  if(compression_method_ == DEFAULT_COMPRESSION)
2921  compression_method_ = LZ4;
2922  }
2923 
2924  ~ChunkedArrayCompressed()
2925  {
2926  typename ChunkStorage::iterator i = this->handle_array_.begin(),
2927  end = this->handle_array_.end();
2928  for(; i != end; ++i)
2929  {
2930  if(i->pointer_)
2931  delete static_cast<Chunk*>(i->pointer_);
2932  i->pointer_ = 0;
2933  }
2934  }
2935 
2936  virtual pointer loadChunk(ChunkBase<N, T> ** p, shape_type const & index)
2937  {
2938  if(*p == 0)
2939  {
2940  *p = new Chunk(this->chunkShape(index));
2941  this->overhead_bytes_ += sizeof(Chunk);
2942  }
2943  return static_cast<Chunk *>(*p)->uncompress(compression_method_);
2944  }
2945 
2946  virtual bool unloadChunk(ChunkBase<N, T> * chunk, bool destroy)
2947  {
2948  if(destroy)
2949  static_cast<Chunk *>(chunk)->deallocate();
2950  else
2951  static_cast<Chunk *>(chunk)->compress(compression_method_);
2952  return destroy;
2953  }
2954 
2955  virtual std::string backend() const
2956  {
2957  switch(compression_method_)
2958  {
2959  case ZLIB:
2960  return "ChunkedArrayCompressed<ZLIB>";
2961  case ZLIB_NONE:
2962  return "ChunkedArrayCompressed<ZLIB_NONE>";
2963  case ZLIB_FAST:
2964  return "ChunkedArrayCompressed<ZLIB_FAST>";
2965  case ZLIB_BEST:
2966  return "ChunkedArrayCompressed<ZLIB_BEST>";
2967  case LZ4:
2968  return "ChunkedArrayCompressed<LZ4>";
2969  default:
2970  return "unknown";
2971  }
2972  }
2973 
2974  virtual std::size_t dataBytes(ChunkBase<N,T> * c) const
2975  {
2976  return c->pointer_ == 0
2977  ? static_cast<Chunk*>(c)->compressed_.size()
2978  : static_cast<Chunk*>(c)->size_*sizeof(T);
2979  }
2980 
2981  virtual std::size_t overheadBytesPerChunk() const
2982  {
2983  return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
2984  }
2985 
2986  CompressionMethod compression_method_;
2987 };
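/* Construction sketch for the compressed back-end; the compression() setter
   on ChunkedArrayOptions is assumed from the option-object idiom used in this
   file (the constructor above reads options.compression_method):

       ChunkedArrayCompressed<3, int> labels(
           Shape3(512, 512, 512), Shape3(64, 64, 64),
           ChunkedArrayOptions().compression(LZ4));
*/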
2988 
2989 /** Implement ChunkedArray as a collection of chunks that can be
2990  swapped out into a temporary file when asleep.
2991 
2992  <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
2993  Namespace: vigra
2994 
2995  The present implementation uses a memory-mapped sparse file to store the chunks.
2996  A sparse file is created on Linux by seeking to the desired size and writing
2997  a single byte there (unwritten regions remain holes by default), and on Windows by
2998  calling DeviceIoControl(file_handle, FSCTL_SET_SPARSE,...) after file creation.
2999 
3000  The file is automatically deleted upon closing. On Windows, this happens
3001  because the file was opened with FILE_FLAG_DELETE_ON_CLOSE in combination
3002  with the flag FILE_ATTRIBUTE_TEMPORARY, which tells the OS to avoid writing
3003  the file to disk if possible. (However, judging from the timings,
3004  something is still written, or cleanup takes considerable time.)
3005  On Linux, automated deletion is achieved via <tt>fileno(tmpfile())</tt>.
3006 */
3007 template <unsigned int N, class T>
3008 class ChunkedArrayTmpFile
3009 : public ChunkedArray<N, T>
3010 {
3011  /* REMARKS
3012 
3013  Alternatives are:
3014  * Don't create a file explicitly, but use the swap file instead. This is
3015  achieved on Linux by mmap(..., MAP_PRIVATE | MAP_ANONYMOUS, -1, ...),
3016  on Windows by calling CreateFileMapping(INVALID_HANDLE_VALUE, ...).
3017  * On Linux, the memory must not be unmapped because this
3018  loses the data. In fact, anonymous mmap() is very similar to
3019  malloc(), and there is probably no good reason to use anonymous mmap().
3020  * On Windows, this is much faster, because the OS will never try to
3021  actually write something to disk (unless swapping is necessary).
3022  */
3023  public:
3024 #ifdef _WIN32
3025  typedef HANDLE FileHandle;
3026 #else
3027  typedef int FileHandle;
3028 #endif
3029 
3030  class Chunk
3031  : public ChunkBase<N, T>
3032  {
3033  public:
3034  typedef typename MultiArrayShape<N>::type shape_type;
3035  typedef T value_type;
3036  typedef value_type * pointer;
3037  typedef value_type & reference;
3038 
3039  Chunk(shape_type const & shape,
3040  std::size_t offset, std::size_t alloc_size,
3041  FileHandle file)
3042  : ChunkBase<N, T>(detail::defaultStride(shape))
3043  , offset_(offset)
3044  , alloc_size_(alloc_size)
3045  , file_(file)
3046  {}
3047 
3048  ~Chunk()
3049  {
3050  unmap();
3051  }
3052 
3053  pointer map()
3054  {
3055  if(this->pointer_ == 0)
3056  {
3057  #ifdef _WIN32
3058  static const std::size_t bits = sizeof(DWORD)*8,
3059  mask = (std::size_t(1) << bits) - 1;
3060  this->pointer_ = (pointer)MapViewOfFile(file_, FILE_MAP_ALL_ACCESS,
3061  std::size_t(offset_) >> bits, offset_ & mask, alloc_size_);
3062  if(this->pointer_ == 0)
3063  winErrorToException("ChunkedArrayChunk::map(): ");
3064  #else
3065  this->pointer_ = (pointer)mmap(0, alloc_size_, PROT_READ | PROT_WRITE, MAP_SHARED,
3066  file_, offset_);
3067  if(this->pointer_ == (pointer)MAP_FAILED) // mmap() reports failure as MAP_FAILED, not 0
3068  { this->pointer_ = 0; throw std::runtime_error("ChunkedArrayChunk::map(): mmap() failed."); }
3069  #endif
3070  }
3071  return this->pointer_;
3072  }
3073 
3074  void unmap()
3075  {
3076  if(this->pointer_ != 0)
3077  {
3078  #ifdef _WIN32
3079  ::UnmapViewOfFile(this->pointer_);
3080  #else
3081  munmap(this->pointer_, alloc_size_);
3082  #endif
3083  this->pointer_ = 0;
3084  }
3085  }
3086 
3087  std::size_t offset_, alloc_size_;
3088  FileHandle file_;
3089 
3090  private:
3091  Chunk & operator=(Chunk const &);
3092  };
3093 
3094  typedef MultiArray<N, SharedChunkHandle<N, T> > ChunkStorage;
3095  typedef MultiArray<N, std::size_t> OffsetStorage;
3096  typedef typename ChunkStorage::difference_type shape_type;
3097  typedef T value_type;
3098  typedef value_type * pointer;
3099  typedef value_type & reference;
3100 
3101  static std::size_t computeAllocSize(shape_type const & shape)
3102  {
3103  std::size_t size = prod(shape)*sizeof(T);
3104  std::size_t mask = mmap_alignment - 1;
3105  return (size + mask) & ~mask;
3106  }
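 /* E.g. with mmap_alignment == 4096, a chunk needing 100000 bytes is rounded
    up to 102400 bytes (25 pages), so every chunk starts at a page-aligned
    file offset as required by mmap() / MapViewOfFile(). */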
3107 
3108  /** \brief Construct with given 'shape', 'chunk_shape' and 'options'.
3109 
3110  If the optional 'path' is given, the file is created in this directory.
3111  Otherwise (default), the path specified by the $TMP or $TEMP environment
3112  variables (in that order) is used.
3113  */
3114  explicit ChunkedArrayTmpFile(shape_type const & shape,
3115  shape_type const & chunk_shape=shape_type(),
3116  ChunkedArrayOptions const & options = ChunkedArrayOptions(),
3117  std::string const & path = "")
3118  : ChunkedArray<N, T>(shape, chunk_shape, options)
3119  #ifndef VIGRA_NO_SPARSE_FILE
3120  , offset_array_(this->chunkArrayShape())
3121  #endif
3122  , file_size_()
3123  , file_capacity_()
3124  {
3125  #ifdef VIGRA_NO_SPARSE_FILE
3126  file_capacity_ = 4*prod(this->chunk_shape_)*sizeof(T);
3127  #else
3128  // compute offset in file
3129  typename OffsetStorage::iterator i = offset_array_.begin(),
3130  end = offset_array_.end();
3131  std::size_t size = 0;
3132  for(; i != end; ++i)
3133  {
3134  *i = size;
3135  size += computeAllocSize(this->chunkShape(i.point()));
3136  }
3137  file_capacity_ = size;
3138  this->overhead_bytes_ += offset_array_.size()*sizeof(std::size_t);
3139  // std::cerr << " file size: " << size << "\n";
3140  #endif
3141 
3142  #ifdef _WIN32
3143  // create a temp file
3144  file_ = ::CreateFile(winTempFileName(path).c_str(), GENERIC_READ | GENERIC_WRITE,
3145  0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, NULL);
3146  if (file_ == INVALID_HANDLE_VALUE)
3147  winErrorToException("ChunkedArrayTmpFile(): ");
3148 
3149  // make it a sparse file
3150  DWORD dwTemp;
3151  if(!::DeviceIoControl(file_, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &dwTemp, NULL))
3152  winErrorToException("ChunkedArrayTmpFile(): ");
3153 
3154  // place the data in the swap file
3155  // file_ = INVALID_HANDLE_VALUE;
3156 
3157  // resize and memory-map the file
3158  static const std::size_t bits = sizeof(LONG)*8, mask = (std::size_t(1) << bits) - 1;
3159  mappedFile_ = CreateFileMapping(file_, NULL, PAGE_READWRITE,
3160  file_capacity_ >> bits, file_capacity_ & mask, NULL);
3161  if(!mappedFile_)
3162  winErrorToException("ChunkedArrayTmpFile(): ");
3163  #else
3164  mappedFile_ = file_ = fileno(tmpfile());
3165  if(file_ == -1)
3166  throw std::runtime_error("ChunkedArrayTmpFile(): unable to open file.");
3167  lseek(file_, file_capacity_-1, SEEK_SET);
3168  if(write(file_, "0", 1) == -1)
3169  throw std::runtime_error("ChunkedArrayTmpFile(): unable to resize file.");
3170  #endif
3171  }
3172 
3173  ~ChunkedArrayTmpFile()
3174  {
3175  typename ChunkStorage::iterator i = this->handle_array_.begin(),
3176  end = this->handle_array_.end();
3177  for(; i != end; ++i)
3178  {
3179  if(i->pointer_)
3180  delete static_cast<Chunk*>(i->pointer_);
3181  i->pointer_ = 0;
3182  }
3183  #ifdef _WIN32
3184  ::CloseHandle(mappedFile_);
3185  ::CloseHandle(file_);
3186  #else
3187  ::close(file_);
3188  #endif
3189  }
3190 
3191  virtual pointer loadChunk(ChunkBase<N, T> ** p, shape_type const & index)
3192  {
3193  if(*p == 0)
3194  {
3195  shape_type shape = this->chunkShape(index);
3196  std::size_t chunk_size = computeAllocSize(shape);
3197  #ifdef VIGRA_NO_SPARSE_FILE
3198  std::size_t offset = file_size_;
3199  if(offset + chunk_size > file_capacity_)
3200  {
3201  file_capacity_ = max<std::size_t>(offset+chunk_size, file_capacity_ * 120 / 100); // extend file by 20%
3202  if(lseek(file_, file_capacity_-1, SEEK_SET) == -1)
3203  throw std::runtime_error("ChunkedArrayTmpFile(): unable to reset file size.");
3204  if(write(file_, "0", 1) == -1)
3205  throw std::runtime_error("ChunkedArrayTmpFile(): unable to resize file.");
3206  }
3207  file_size_ += chunk_size;
3208  #else
3209  std::size_t offset = offset_array_[index];
3210  #endif
3211  *p = new Chunk(shape, offset, chunk_size, mappedFile_);
3212  this->overhead_bytes_ += sizeof(Chunk);
3213  }
3214  return static_cast<Chunk*>(*p)->map();
3215  }
3216 
3217  virtual bool unloadChunk(ChunkBase<N, T> * chunk, bool /* destroy*/)
3218  {
3219  static_cast<Chunk *>(chunk)->unmap();
3220  return false; // never destroys the data
3221  }
3222 
3223  virtual std::string backend() const
3224  {
3225  return "ChunkedArrayTmpFile";
3226  }
3227 
3228  virtual std::size_t dataBytes(ChunkBase<N,T> * c) const
3229  {
3230  return c->pointer_ == 0
3231  ? 0
3232  : static_cast<Chunk*>(c)->alloc_size_;
3233  }
3234 
3235  virtual std::size_t overheadBytesPerChunk() const
3236  {
3237  #ifdef VIGRA_NO_SPARSE_FILE
3238  return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
3239  #else
3240  return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>) + sizeof(std::size_t);
3241  #endif
3242  }
3243 
3244  #ifndef VIGRA_NO_SPARSE_FILE
3245  OffsetStorage offset_array_; // the file offset of each chunk
3246  #endif
3247  FileHandle file_, mappedFile_; // the file back-end
3248  std::size_t file_size_, file_capacity_;
3249 };
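/* Usage sketch: an array far larger than RAM, backed by a temporary sparse
   file; only the cached chunks are memory-mapped at any time:

       ChunkedArrayTmpFile<3, float> a(Shape3(4000, 4000, 4000));  // ~256 GB logical
       a.setCacheMaxSize(100);  // keep at most 100 chunks mapped
*/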
3250 
3251 template<unsigned int N, class U>
3252 class ChunkIterator
3253 : public MultiCoordinateIterator<N>
3254 , private MultiArrayView<N, typename UnqualifiedType<U>::type>
3255 {
3256  public:
3257  typedef typename UnqualifiedType<U>::type T;
3258  typedef MultiCoordinateIterator<N> base_type;
3259  typedef MultiArrayView<N, T> base_type2;
3260 
3261  typedef typename base_type::shape_type shape_type;
3262  typedef typename base_type::difference_type difference_type;
3263  typedef ChunkIterator iterator;
3264  typedef std::random_access_iterator_tag iterator_category;
3265 
3266  typedef MultiArrayView<N, T> value_type;
3267  typedef MultiArrayView<N, T> & reference;
3268  typedef MultiArrayView<N, T> const & const_reference;
3269  typedef MultiArrayView<N, T> * pointer;
3270  typedef MultiArrayView<N, T> const * const_pointer;
3271 
3272  typedef typename IfBool<UnqualifiedType<U>::isConst,
3273  ChunkedArrayBase<N, T> const,
3274  ChunkedArrayBase<N, T> >::type array_type;
3275  typedef IteratorChunkHandle<N, T> Chunk;
3276 
3277 
3278  ChunkIterator()
3279  : base_type()
3280  , base_type2()
3281  {}
3282 
3283  ChunkIterator(array_type * array,
3284  shape_type const & start, shape_type const & end,
3285  shape_type const & chunk_start, shape_type const & chunk_end,
3286  shape_type const & chunk_shape)
3287  : base_type(chunk_start, chunk_end)
3288  , array_(array)
3289  , chunk_(chunk_start * chunk_shape)
3290  , start_(start - chunk_.offset_)
3291  , stop_(end - chunk_.offset_)
3292  , chunk_shape_(chunk_shape)
3293  {
3294  getChunk();
3295  }
3296 
3297  ChunkIterator(ChunkIterator const & rhs)
3298  : base_type(rhs)
3299  , base_type2(rhs)
3300  , array_(rhs.array_)
3301  , chunk_(rhs.chunk_)
3302  , start_(rhs.start_)
3303  , stop_(rhs.stop_)
3304  , chunk_shape_(rhs.chunk_shape_)
3305  {
3306  getChunk();
3307  }
3308 
3309  ChunkIterator & operator=(ChunkIterator const & rhs)
3310  {
3311  if(this != &rhs)
3312  {
3313  base_type::operator=(rhs);
3314  array_ = rhs.array_;
3315  chunk_ = rhs.chunk_;
3316  start_ = rhs.start_;
3317  stop_ = rhs.stop_;
3318  chunk_shape_ = rhs.chunk_shape_;
3319  getChunk();
3320  }
3321  return *this;
3322  }
3323 
3324  reference operator*()
3325  {
3326  return *this;
3327  }
3328 
3329  const_reference operator*() const
3330  {
3331  return *this;
3332  }
3333 
3334  pointer operator->()
3335  {
3336  return this;
3337  }
3338 
3339  const_pointer operator->() const
3340  {
3341  return this;
3342  }
3343 
3344  value_type operator[](MultiArrayIndex i) const
3345  {
3346  return *(ChunkIterator(*this) += i);
3347  }
3348 
3349  value_type operator[](const shape_type &coordOffset) const
3350  {
3351  return *(ChunkIterator(*this) += coordOffset);
3352  }
3353 
3354  void getChunk()
3355  {
3356  if(array_)
3357  {
3358  shape_type array_point = max(start_, this->point()*chunk_shape_),
3359  upper_bound(SkipInitialization);
3360  this->m_ptr = array_->chunkForIterator(array_point, this->m_stride, upper_bound, &chunk_);
3361  this->m_shape = min(upper_bound, stop_) - array_point;
3362  }
3363  }
3364 
3365  shape_type chunkStart() const
3366  {
3367  return max(start_, this->point()*chunk_shape_) + chunk_.offset_;
3368  }
3369 
3370  shape_type chunkStop() const
3371  {
3372  return chunkStart() + this->m_shape;
3373  }
3374 
3375  ChunkIterator & operator++()
3376  {
3377  base_type::operator++();
3378  getChunk();
3379  return *this;
3380  }
3381 
3382  ChunkIterator operator++(int)
3383  {
3384  ChunkIterator res(*this);
3385  ++*this;
3386  return res;
3387  }
3388 
3389  ChunkIterator & operator+=(MultiArrayIndex i)
3390  {
3391  base_type::operator+=(i);
3392  getChunk();
3393  return *this;
3394  }
3395 
3396  ChunkIterator & operator+=(const shape_type &coordOffset)
3397  {
3398  base_type::operator+=(coordOffset);
3399  getChunk();
3400  return *this;
3401  }
3402 
3403  ChunkIterator & operator--()
3404  {
3405  base_type::operator--();
3406  getChunk();
3407  return *this;
3408  }
3409 
3410  ChunkIterator operator--(int)
3411  {
3412  ChunkIterator res(*this);
3413  --*this;
3414  return res;
3415  }
3416 
3417  ChunkIterator & operator-=(MultiArrayIndex i)
3418  {
3419  return operator+=(-i);
3420  }
3421 
3422  ChunkIterator & operator-=(const shape_type &coordOffset)
3423  {
3424  return operator+=(-coordOffset);
3425  }
3426 
3427  ChunkIterator getEndIterator() const
3428  {
3429  ChunkIterator res(*this);
3430  static_cast<base_type &>(res) = base_type::getEndIterator();
3431  res.getChunk();
3432  return res;
3433  }
3434 
3435  ChunkIterator operator+(MultiArrayIndex d) const
3436  {
3437  return ChunkIterator(*this) += d;
3438  }
3439 
3440  ChunkIterator operator-(MultiArrayIndex d) const
3441  {
3442  return ChunkIterator(*this) -= d;
3443  }
3444 
3445  ChunkIterator operator+(const shape_type &coordOffset) const
3446  {
3447  return ChunkIterator(*this) += coordOffset;
3448  }
3449 
3450  ChunkIterator operator-(const shape_type &coordOffset) const
3451  {
3452  return ChunkIterator(*this) -= coordOffset;
3453  }
3454 
3455  MultiArrayIndex operator-(const ChunkIterator & other) const
3456  {
3457  return base_type::operator-(other);
3458  }
3459 
3460 #ifndef DOXYGEN // doxygen doesn't understand this
3461  using base_type::operator==;
3462  using base_type::operator!=;
3463 #endif
3464  using base_type::shape;
3465 
3466  array_type * array_;
3467  Chunk chunk_;
3468  shape_type start_, stop_, chunk_shape_, array_point_;
3469 };
3470 
3471 //@}
3472 
3473 } // namespace vigra
3474 
3475 #undef VIGRA_ASSERT_INSIDE
3476 
3477 #endif /* VIGRA_MULTI_ARRAY_CHUNKED_HXX */
Bytes of main memory needed to manage a single chunk.
Definition: multi_array_chunked.hxx:2790
void compress(char const *source, std::size_t size, ArrayVector< char > &dest, CompressionMethod method)
Diff2D operator+(Diff2D const &a, Diff2D const &b)
Definition: diff2d.hxx:739
ChunkedArrayLazy(shape_type const &shape, shape_type const &chunk_shape=shape_type(), ChunkedArrayOptions const &options=ChunkedArrayOptions(), Alloc const &alloc=Alloc())
Construct with given &#39;shape&#39;, &#39;chunk_shape&#39; and &#39;options&#39;, using the allocator &#39;alloc&#39; to manage the ...
Definition: multi_array_chunked.hxx:2741
MultiArrayView< N-1, T, ChunkedArrayTag > bindOuter(difference_type_1 index) const
Create a lower dimensional view to the chunked array.
Definition: multi_array_chunked.hxx:2295
MultiArrayView subarray(difference_type p, difference_type q) const
Definition: multi_array.hxx:1476
const_iterator cend() const
Create the end iterator for read-only scan-order iteration over the entire chunked array...
Definition: multi_array_chunked.hxx:2407
Interface and base class for chunked arrays.
Definition: multi_array_chunked.hxx:470
std::ptrdiff_t MultiArrayIndex
Definition: multi_fwd.hxx:60
bool allLess(TinyVectorBase< V1, SIZE, D1, D2 > const &l, TinyVectorBase< V2, SIZE, D3, D4 > const &r)
pointwise less-than
Definition: tinyvector.hxx:1375
Definition: multi_array_chunked.hxx:2522
Definition: accessor.hxx:43
void checkoutSubarray(shape_type const &start, MultiArrayView< N, U, Stride > &subarray) const
Copy an ROI of the chunked array into an ordinary MultiArrayView.
Definition: multi_array_chunked.hxx:2094
view_type::difference_type difference_type
Definition: multi_array.hxx:2470
FFTWComplex< R > & operator-=(FFTWComplex< R > &a, const FFTWComplex< R > &b)
subtract-assignment
Definition: fftw3.hxx:867
virtual std::size_t overheadBytesPerChunk() const
Bytes of main memory needed to manage a single chunk.
Definition: multi_array_chunked.hxx:3235
MultiArrayView< N-1, T, ChunkedArrayTag > bindAt(MultiArrayIndex dim, MultiArrayIndex index) const
Create a lower dimensional view to the chunked array.
Definition: multi_array_chunked.hxx:2267
ChunkedArrayOptions & compression(CompressionMethod v)
Compress inactive chunks with the given method.
Definition: multi_array_chunked.hxx:1319
std::size_t dataBytesPerChunk() const
Number of data bytes in an uncompressed chunk.
Definition: multi_array_chunked.hxx:1700
std::size_t cacheMaxSize() const
Get the number of chunks the cache will hold.
Definition: multi_array_chunked.hxx:2359
shape_type chunkShape(shape_type const &chunk_index) const
Find the shape of the chunk indexed by &#39;chunk_index&#39;.
Definition: multi_array_chunked.hxx:1739
void setCacheMaxSize(std::size_t c)
Set the number of chunks the cache will hold.
Definition: multi_array_chunked.hxx:2371
shape_type chunkStart(shape_type const &global_start) const
Find the chunk that contains array element &#39;global_start&#39;.
Definition: multi_array_chunked.hxx:1711
Definition: array_vector.hxx:58
FFTWComplex< R > & operator+=(FFTWComplex< R > &a, const FFTWComplex< R > &b)
add-assignment
Definition: fftw3.hxx:859
Definition: multi_fwd.hxx:63
view_type::reference reference
Definition: multi_array.hxx:2458
virtual shape_type chunkArrayShape() const
Number of chunks along each coordinate direction.
Definition: multi_array_chunked.hxx:1691
Int32 log2i(UInt32 x)
Compute the base-2 logarithm of an integer.
Definition: mathutil.hxx:344
NumericTraits< V >::Promote prod(TinyVectorBase< V, SIZE, D1, D2 > const &l)
product of the vector&#39;s elements
Definition: tinyvector.hxx:2097
const difference_type & shape() const
Definition: multi_array.hxx:1596
bool operator!=(FFTWComplex< R > const &a, const FFTWComplex< R > &b)
not equal
Definition: fftw3.hxx:841
iterator end()
Create the end iterator for scan-order iteration over the entire chunked array.
Definition: multi_array_chunked.hxx:2391
ChunkedArrayOptions & fillValue(double v)
Element value for read-only access of uninitialized chunks.
Definition: multi_array_chunked.hxx:1289
bool operator!=(MultiArrayView< N, U, C1 > const &rhs) const
Check if two arrays differ in at least one element.
Definition: multi_array_chunked.hxx:1802
bool operator==(FFTWComplex< R > const &a, const FFTWComplex< R > &b)
equal
Definition: fftw3.hxx:825
const_iterator begin() const
Create a read-only scan-order iterator for the entire chunked array.
Definition: multi_array_chunked.hxx:2415
TinyVector< MultiArrayIndex, N > type
Definition: multi_shape.hxx:250
view_type::size_type size_type
Definition: multi_array.hxx:2466
shape_type const & shape() const
Return the shape in this array.
bool operator==(MultiArrayView< N, U, C1 > const &rhs) const
Check if two arrays are elementwise equal.
Definition: multi_array_chunked.hxx:1787
ChunkedArrayOptions & cacheMax(int v)
Maximum number of chunks in the cache.
Definition: multi_array_chunked.hxx:1304
view_type::const_pointer const_pointer
Definition: multi_array.hxx:2454
view_type::value_type value_type
Definition: multi_array.hxx:2446
void setItem(shape_type const &point, value_type const &v)
Write the array element at index &#39;point&#39;.
Definition: multi_array_chunked.hxx:2246
Definition: multi_array_chunked.hxx:2687
Class for fixed size vectors.This class contains an array of size SIZE of the specified VALUETYPE...
Definition: accessor.hxx:940
Definition: metaprogramming.hxx:119
MultiArrayView< N-M, T, ChunkedArrayTag > bindInner(const TinyVector< Index, M > &d) const
Create a lower dimensional view to the chunked array.
Definition: multi_array_chunked.hxx:2339
view_type::difference_type_1 difference_type_1
Definition: multi_array.hxx:2474
virtual shape_type chunkArrayShape() const
Number of chunks along each coordinate direction.
Definition: multi_array_chunked.hxx:2609
virtual std::size_t overheadBytesPerChunk() const
Bytes of main memory needed to manage a single chunk.
Definition: multi_array_chunked.hxx:2629
detail::SelectIntegerType< 32, detail::UnsignedIntTypes >::type UInt32
32-bit unsigned int
Definition: sized_int.hxx:183
std::size_t dataBytes() const
Bytes of main memory occupied by the array&#39;s data.
Definition: multi_array_chunked.hxx:1677
Definition: multi_array_chunked.hxx:3008
Base class for, and view to, vigra::MultiArray.
Definition: multi_array.hxx:652
void uncompress(char const *source, std::size_t srcSize, char *dest, std::size_t destSize, CompressionMethod method)
int cacheSize() const
Number of chunks currently fitting into the cache.
Definition: multi_array_chunked.hxx:1667
chunk_const_iterator chunk_cbegin(shape_type const &start, shape_type const &stop) const
Create a read-only iterator over all chunks intersected by the given ROI.
Definition: multi_array_chunked.hxx:2464
bool allLessEqual(TinyVectorBase< V1, SIZE, D1, D2 > const &l, TinyVectorBase< V2, SIZE, D3, D4 > const &r)
pointwise less-equal
Definition: tinyvector.hxx:1399
view_type subarray(shape_type const &start, shape_type const &stop)
Create a view to the specified ROI.
Definition: multi_array_chunked.hxx:2182
UInt32 ceilPower2(UInt32 x)
Round up to the nearest power of 2.
Definition: mathutil.hxx:278
size_type size() const
Definition: array_vector.hxx:358
const_view_type subarray(shape_type const &start, shape_type const &stop) const
Create a read-only view to the specified ROI.
Definition: multi_array_chunked.hxx:2196
TinyVector< V, SIZE > transpose(TinyVector< V, SIZE > const &t, TinyVector< T, SIZE > const &permutation)
transposed copy
Definition: tinyvector.hxx:2251
chunk_const_iterator chunk_cend(shape_type const &start, shape_type const &stop) const
Create the end iterator for read-only iteration over all chunks intersected by the given ROI...
Definition: multi_array_chunked.hxx:2473
Iterate over a virtual array where each element contains its coordinate.
Definition: multi_fwd.hxx:157
chunk_const_iterator chunk_begin(shape_type const &start, shape_type const &stop) const
Create a read-only iterator over all chunks intersected by the given ROI.
Definition: multi_array_chunked.hxx:2447
value_type getItem(shape_type const &point) const
Read the array element at index &#39;point&#39;.
Definition: multi_array_chunked.hxx:2223
view_type::const_reference const_reference
Definition: multi_array.hxx:2462
MultiArrayView< N-1, T, ChunkedArrayTag > bind(difference_type_1 index) const
Create a lower dimensional view to the chunked array.
Definition: multi_array_chunked.hxx:2283
ChunkedArrayOptions()
Initialize options with defaults.
Definition: multi_array_chunked.hxx:1279

© Ullrich Köthe (ullrich.koethe@iwr.uni-heidelberg.de)
Heidelberg Collaboratory for Image Processing, University of Heidelberg, Germany

html generated using doxygen and Python
vigra 1.11.0