def read_chunks(data, chunk_size):
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]

data = list(range(100))
for chunk in read_chunks(data, 15):
    print(f"Processing chunk of {len(chunk)}: sum={sum(chunk)}")
# Generator expressions (like list comprehensions, but lazy)
total = sum(x**2 for x in range(1_000_000))
print(f"Sum of squares: {total}")
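That laziness is visible in memory usage: a list comprehension materializes every item up front, while a generator expression is a small iterator object that produces items on demand. A quick sketch of the difference (the variable names here are illustrative):

```python
import sys

# List comprehension: all 1,000,000 squares exist in memory at once.
squares_list = [x**2 for x in range(1_000_000)]

# Generator expression: a small, constant-size iterator object.
squares_gen = (x**2 for x in range(1_000_000))

print(sys.getsizeof(squares_list))  # size grows with the data
print(sys.getsizeof(squares_gen))   # small, constant size
```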
# Chaining generators
def evens(source):
    for x in source:
        if x % 2 == 0:
            yield x

def doubled(source):
    for x in source:
        yield x * 2

pipeline = doubled(evens(range(10)))
print(list(pipeline))
Generators process one item at a time, using constant memory regardless of data size. Chain them to build data processing pipelines.
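Because nothing is materialized until an item is requested, the same chained pipeline even works on an infinite source. A sketch using `itertools.count` as an unbounded input and `itertools.islice` to take a finite slice:

```python
from itertools import count, islice

def evens(source):
    for x in source:
        if x % 2 == 0:
            yield x

def doubled(source):
    for x in source:
        yield x * 2

# count() never ends, but the pipeline only computes
# the five items that islice actually requests.
infinite_pipeline = doubled(evens(count()))
print(list(islice(infinite_pipeline, 5)))  # [0, 4, 8, 12, 16]
```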
Challenge
Try modifying the code above to explore different behaviors. Can you extend the example to handle a new use case?
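One way to approach the challenge is to add a third stage to the pipeline. The `squared` stage below is a hypothetical extension, not part of the original lesson:

```python
def evens(source):
    for x in source:
        if x % 2 == 0:
            yield x

def doubled(source):
    for x in source:
        yield x * 2

def squared(source):  # hypothetical extra pipeline stage
    for x in source:
        yield x * x

pipeline = squared(doubled(evens(range(10))))
print(list(pipeline))  # [0, 16, 64, 144, 256]
```

Each stage stays independent and reusable; building a longer pipeline is just another function composition.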