The Prelude: a standard module. The Prelude is imported by default into all Haskell modules unless either there is an explicit import statement for it, or the NoImplicitPrelude extension is enabled.
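For example (a minimal sketch; the module name and definitions are illustrative), a module that enables NoImplicitPrelude must import every name it uses explicitly, even basic ones:

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
-- With NoImplicitPrelude enabled, nothing from the Prelude is in scope
-- implicitly; even IO, (+) and putStrLn must be imported by hand.
module Main (main) where

import Prelude (IO, Int, putStrLn, show, (+))

answer :: Int
answer = 40 + 2

main :: IO ()
main = putStrLn (show answer)
```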
Internal modules are always subject to change from version to version.
Custom GHC Prelude. This module serves as a replacement for the Prelude module, abstracts over differences between bootstrapping GHC versions, and may also provide a common default vocabulary.
General purpose utilities The names in this module clash heavily with the Haskell Prelude, so I recommend the following import scheme:
import Pipes
import qualified Pipes.Prelude as P  -- or use any other qualifier you prefer
Note that String-based IO is inefficient. The String-based utilities in this module exist only for simple demonstrations without incurring a dependency on the text package. Also, stdinLn and stdoutLn remove and add newlines, respectively. This behavior is intended to simplify examples. The corresponding stdin and stdout utilities from pipes-bytestring and pipes-text preserve newlines.
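As a hedged sketch of the recommended import scheme (assuming the pipes package is available), the String-based utilities compose into a simple line-oriented pipeline; note how stdinLn strips the newline and stdoutLn restores it:

```haskell
import Data.Char (toUpper)
import Pipes
import qualified Pipes.Prelude as P

-- Echo standard input to standard output, upper-cased, until EOF.
-- P.stdinLn yields each line without its newline; P.stdoutLn adds
-- the newline back when printing.
main :: IO ()
main = runEffect $ P.stdinLn >-> P.map (map toUpper) >-> P.stdoutLn
```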
Mostly for compatibility across different base Prelude changes.
A module to re-export most of the functionality of the diagrams core and standard library.
The names exported by this module are closely modeled on those in Prelude and Data.List, but also on Pipes.Prelude, Pipes.Group and Pipes.Parse. The module may be said to give independent expression to the conception of Producer / Source / Generator manipulation articulated in the latter two modules. Because we dispense with piping and conduiting, the distinction between all of these modules collapses. Some things are lost but much is gained: on the one hand, everything comes much closer to ordinary beginning Haskell programming and, on the other, acquires the plasticity of programming directly with a general free monad type. The leading type, Stream (Of a) m r is chosen to permit an api that is as close as possible to that of Data.List and the Prelude. Import qualified thus:
import Streaming
import qualified Streaming.Prelude as S
For the examples below, one sometimes needs
import Streaming.Prelude (each, yield, next, mapped, stdoutLn, stdinLn)
import Data.Function ((&))
Other libraries that come up in passing are
import qualified Control.Foldl as L -- cabal install foldl
import qualified Pipes as P
import qualified Pipes.Prelude as P
import qualified System.IO as IO
Here are some correspondences between the types employed here and elsewhere:
streaming                           | pipes                         | conduit             | io-streams
------------------------------------+-------------------------------+---------------------+-----------------------
Stream (Of a) m ()                  | Producer a m ()               | Source m a          | InputStream a
                                    | ListT m a                     | ConduitM () o m ()  | Generator r ()
Stream (Of a) m r                   | Producer a m r                | ConduitM () o m r   | Generator a r
Stream (Of a) m (Stream (Of a) m r) | Producer a m (Producer a m r) |                     |
Stream (Stream (Of a) m) r          | FreeT (Producer a m) m r      |                     |
ByteString m ()                     | Producer ByteString m ()      | Source m ByteString | InputStream ByteString
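To make the correspondence concrete, here is a minimal sketch in the Data.List style the module aims for (assuming the streaming package is available):

```haskell
import qualified Streaming.Prelude as S

-- Sum the squares of the first five odd numbers, streaming from an
-- infinite source instead of building an intermediate list.
main :: IO ()
main = do
  total <- S.sum_ $ S.map (^ 2) $ S.take 5 $ S.filter odd $ S.each [1 ..]
  print (total :: Int)  -- 1 + 9 + 25 + 49 + 81 = 165
```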
This module provides a large suite of utilities that resemble Unix utilities. Many of these commands are just existing Haskell commands renamed to match their Unix counterparts:
>>> :set -XOverloadedStrings

>>> cd "/tmp"

>>> pwd
FilePath "/tmp"
Some commands are Shells that emit streams of values. view prints all values in a Shell stream:
>>> view (ls "/usr")
FilePath "/usr/lib"
FilePath "/usr/src"
FilePath "/usr/sbin"
FilePath "/usr/include"
FilePath "/usr/share"
FilePath "/usr/games"
FilePath "/usr/local"
FilePath "/usr/bin"

>>> view (find (suffix "") "/usr/lib")
FilePath "/usr/lib/python3.4/idlelib/"
Use fold to reduce the output of a Shell stream:
>>> import qualified Control.Foldl as Fold

>>> fold (ls "/usr") Fold.length

>>> fold (find (suffix "") "/usr/lib") Fold.head
Just (FilePath "/usr/lib/python3.4/idlelib/")
Create files using output:
>>> output "foo.txt" ("123" <|> "456" <|> "ABC")

>>> realpath "foo.txt"
FilePath "/tmp/foo.txt"
Read in files using input:
>>> stdout (input "foo.txt")
123
456
ABC
Format strings in a type safe way using format:
>>> dir <- pwd

>>> format ("I am in the "%fp%" directory") dir
"I am in the /tmp directory"
Commands like grep, sed and find accept arbitrary Patterns:
>>> stdout (grep ("123" <|> "ABC") (input "foo.txt"))
123
ABC

>>> let exclaim = fmap (<> "!") (plus digit)

>>> stdout (sed exclaim (input "foo.txt"))
123!
456!
ABC
Note that grep and find differ from their Unix counterparts by requiring that the Pattern matches the entire line or file name by default. However, you can optionally match the prefix, suffix, or interior of a line:
>>> stdout (grep (has    "2") (input "foo.txt"))
123

>>> stdout (grep (prefix "1") (input "foo.txt"))
123

>>> stdout (grep (suffix "3") (input "foo.txt"))
123
You can also build up more sophisticated Shell programs using sh in conjunction with do notation:
{-# LANGUAGE OverloadedStrings #-}

import Turtle

main = sh example

example = do
    -- Read in file names from "files1.txt" and "files2.txt"
    file <- fmap fromText (input "files1.txt" <|> input "files2.txt")

    -- Stream each file to standard output only if the file exists
    True <- liftIO (testfile file)
    line <- input file
    liftIO (echo line)
See Turtle.Tutorial for an extended tutorial explaining how to use this library in greater detail.
Simple resource management functions
Low level functions using StreamK as the intermediate stream type. These functions are used in the SerialT, AsyncT, AheadT, and ParallelT stream modules to implement their instances.
To run examples in this module:
>>> import qualified Streamly.Data.Fold as Fold

>>> import qualified Streamly.Prelude as Stream
We will add some more imports in the examples as needed. For effectful streams we will use the following IO action that blocks for n seconds:
>>> import Control.Concurrent (threadDelay)

>>> :{
delay n = do
    threadDelay (n * 1000000)   -- sleep for n seconds
    putStrLn (show n ++ " sec") -- print "n sec"
    return n                    -- IO Int
:}

>>> delay 1
1 sec
1


Streamly is a framework for modular, data-flow-based programming and declarative concurrency. Its powerful stream fusion framework allows high-performance combinatorial programming even when using byte-level streams. The Streamly API is similar to that of Haskell lists. The basic stream type is SerialT; the type SerialT IO a is an effectful equivalent of a list [a] in the IO monad. Streams can be constructed like lists, except that they use nil instead of [] and cons instead of (:). cons constructs a pure stream, which is more or less the same as a list:
>>> import Streamly.Prelude (SerialT, cons, consM, nil)

>>> stream = 1 `cons` 2 `cons` nil :: SerialT IO Int

>>> Stream.toList stream -- IO [Int]
[1,2]
consM constructs a stream from effectful actions:
>>> stream = delay 1 `consM` delay 2 `consM` nil

>>> Stream.toList stream
1 sec
2 sec
[1,2]

Console Echo Program

In the following example, repeatM generates an infinite stream of Strings by repeatedly performing the getLine IO action. mapM then applies putStrLn to each element in the stream, converting it to a stream of (). Finally, drain folds the stream to IO, discarding the () values and thus producing only effects.
>>> import Data.Function ((&))
>>> :{
Stream.repeatM getLine      -- SerialT IO String
    & Stream.mapM putStrLn  -- SerialT IO ()
    & Stream.drain          -- IO ()
:}
This is a console echo program. It is an example of a declarative loop written using streaming combinators. Compare it with an imperative while loop. Hopefully, this gives you an idea how we can program declaratively by representing loops using streams. In this module, you can find all Data.List like functions and many more powerful combinators to perform common programming tasks. Also see Streamly.Internal.Data.Stream.IsStream module for many more Pre-release combinators. See the repository for many more real world examples of stream programming.

Polymorphic Combinators

Streamly has several stream types, SerialT is one type of stream with serial execution of actions, AsyncT is another with concurrent execution. The combinators in this module are polymorphic in stream type. For example,
repeatM :: (IsStream t, MonadAsync m) => m a -> t m a
t is the stream type, m is the underlying Monad of the stream (e.g. IO) and a is the type of elements in the stream (e.g. Int). Stream elimination combinators accept a SerialT type instead of a polymorphic type to force a concrete monomorphic type by default, reducing type errors. That's why in the console echo example above the stream type is SerialT.
drain :: Monad m => SerialT m a -> m ()
We can force a certain stream type in polymorphic code by using "Stream Type Adaptors". For example, to force AsyncT:
>>> Stream.drain $ Stream.fromAsync $ Stream.replicateM 10 $ delay 1

Combining two streams

Two streams can be combined to form a single stream in various interesting ways: serial (append), wSerial (interleave), ahead (concurrent, ordered append), async (lazy concurrent, unordered append), wAsync (lazy concurrent, unordered interleave), parallel (strict concurrent merge), zipWith, zipAsyncWith (concurrent zip), mergeBy, and mergeAsyncBy (concurrent merge) are some of the ways to combine two streams. For example, the parallel combinator schedules both streams concurrently.
>>> stream1 = Stream.fromListM [delay 3, delay 4]

>>> stream2 = Stream.fromListM [delay 1, delay 2]

>>> Stream.toList $ stream1 `parallel` stream2
We can chain the operations to combine more than two streams:
>>> stream3 = Stream.fromListM [delay 1, delay 2]

>>> Stream.toList $ stream1 `parallel` stream2 `parallel` stream3
Concurrent generation (consM) and concurrent merging of streams is the fundamental basis of all concurrency in streamly.

Combining many streams

The concatMapWith combinator can be used to generalize the two stream combining combinators to n streams. For example, we can use concatMapWith parallel to read concurrently from all incoming network connections and combine the input streams into a single output stream:
import qualified Streamly.Network.Inet.TCP as TCP
import qualified Streamly.Network.Socket as Socket

Stream.unfold TCP.acceptOnPort 8090
    & Stream.concatMapWith Stream.parallel (Stream.unfold Socket.read)
See the streamly-examples repository for a full working example.

Concurrent Nested Loops

The Monad instance of SerialT is an example of nested looping. It is in fact a list transformer. Different stream types provide different variants of nested looping. For example, the Monad instance of ParallelT uses concatMapWith parallel as its bind operation. Therefore, each iteration of the loop for ParallelT stream can run concurrently. See the documentation for individual stream types for the specific execution behavior of the stream as well as the behavior of Semigroup and Monad instances.
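As an illustration (a sketch using names from Streamly.Prelude; the output interleaving depends on scheduling), the list-transformer style nested loop looks like ordinary do notation, and switching the stream type switches the execution policy:

```haskell
import Control.Monad.IO.Class (liftIO)
import qualified Streamly.Prelude as Stream

-- A nested loop in do notation. Under fromSerial the four pairs print
-- in list order; under fromParallel, iterations of the loop body may
-- run concurrently, so the interleaving is scheduler-dependent.
loops :: IO ()
loops = Stream.drain $ Stream.fromParallel $ do
  x <- Stream.fromList [1, 2 :: Int]
  y <- Stream.fromList [3, 4 :: Int]
  liftIO $ print (x, y)
```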

Stream Types

Streamly has several stream types. These types differ in three fundamental operations, consM (IsStream instance), <> (Semigroup instance) and >>= (Monad instance). Below we will see how consM behaves for SerialT, AsyncT and AheadT stream types. SerialT executes actions serially, so the total delay in the following example is 2 + 1 = 3 seconds:
>>> stream = delay 2 `consM` delay 1 `consM` nil

>>> Stream.toList stream -- IO [Int]
2 sec
1 sec
[2,1]
AsyncT executes the actions concurrently, so the total delay is max 2 1 = 2 seconds:
>>> Stream.toList $ Stream.fromAsync stream -- IO [Int]
1 sec
2 sec
[1,2]
AsyncT produces the results in the order in which execution finishes. Notice the order of elements in the list above, it is not the same as the order of actions in the stream. AheadT is similar to AsyncT but the order of results is the same as the order of actions, even though they execute concurrently:
>>> Stream.toList $ Stream.fromAhead stream -- IO [Int]
1 sec
2 sec
[2,1]

Semigroup Instance

Earlier we distinguished stream types based on the execution behavior of actions within a stream. Stream types are also distinguished based on how actions from different streams are scheduled for execution when two streams are combined together. For example, both SerialT and WSerialT execute actions within the stream serially, however, they differ in how actions from individual streams are executed when two streams are combined with <> (the Semigroup instance). For SerialT, <> has an appending behavior i.e. it executes the actions from the second stream after executing actions from the first stream:
>>> stream1 = Stream.fromListM [delay 1, delay 2]

>>> stream2 = Stream.fromListM [delay 3, delay 4]

>>> Stream.toList $ stream1 <> stream2
1 sec
2 sec
3 sec
4 sec
[1,2,3,4]
For WSerialT, <> has an interleaving behavior i.e. it executes one action from the first stream and then one action from the second stream and so on:
>>> Stream.toList $ Stream.fromWSerial $ stream1 <> stream2
1 sec
3 sec
2 sec
4 sec
[1,3,2,4]
The <> operation of SerialT and WSerialT is the same as serial and wSerial, respectively. These combinators combine two streams of any type in the corresponding manner.

Concurrent Combinators

Like consM, there are several other stream generation operations whose execution behavior depends on the stream type, they all follow behavior similar to consM. By default, folds like drain force the stream type to be SerialT, so replicateM in the following code runs serially, and takes 10 seconds:
>>> Stream.drain $ Stream.replicateM 10 $ delay 1
We can use the fromAsync combinator to force the argument stream to be of AsyncT type; replicateM in the following example then executes the replicated actions concurrently, taking only 1 second:
>>> Stream.drain $ Stream.fromAsync $ Stream.replicateM 10 $ delay 1
We can use mapM to map an action concurrently:
>>> f x = delay 1 >> return (x + 1)

>>> Stream.toList $ Stream.fromAhead $ Stream.mapM f $ Stream.fromList [1..3]
fromAhead forces mapM to run in AheadT style, so all three actions together take only one second even though each individual action blocks for a second. See the documentation of individual combinators to check whether they are concurrent. Concurrent combinators necessarily have a MonadAsync m constraint; however, a MonadAsync m constraint does not necessarily mean that the combinator is concurrent.

Automatic Concurrency Control

SerialT (and WSerialT) runs all tasks serially whereas ParallelT runs all tasks concurrently i.e. one thread per task. The stream types AsyncT, WAsyncT, and AheadT provide demand driven concurrency. It means that based on the rate at which the consumer is consuming the stream, it maintains the optimal number of threads to increase or decrease parallelism. However, the programmer can control the maximum number of threads using maxThreads. It provides an upper bound on the concurrent IO requests or CPU cores that can be used. maxBuffer limits the number of evaluated stream elements that we can buffer. See the "Concurrency Control" section for details.


When we use combinators like fromAsync on a piece of code, all combinators inside the argument of fromAsync become concurrent, which is often counterproductive. Therefore, we recommend that in a pipeline you identify the combinators that you really want to be concurrent and add a fromSerial after those combinators, so that the code following them remains serial:
Stream.fromAsync $ ... concurrent combinator here ... $ Stream.fromSerial $ ...


Functions with the suffix M are general functions that work on monadic arguments. The corresponding functions without the suffix M work on pure arguments and can in general be derived from their monadic versions but are provided for convenience and for consistency with other pure APIs in the base package. In many cases, short definitions of the combinators are provided in the documentation for illustration. The actual implementation may differ for performance reasons.
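For instance, replicate and replicateM in Streamly.Prelude form such a pair (a sketch; the pure version behaves as if derived from the monadic one):

```haskell
import qualified Streamly.Prelude as Stream

-- replicate n x produces the same stream as replicateM n (pure x);
-- the pure variant exists for convenience and base-like consistency.
pureVersion, monadicVersion :: IO [Int]
pureVersion    = Stream.toList $ Stream.replicate  3 7
monadicVersion = Stream.toList $ Stream.replicateM 3 (pure 7)
```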
This module may change between minor releases. Do not rely on its contents.
This module defines the explicitly clocked counterparts of the functions defined in Clash.Prelude.
Clash is a functional hardware description language that borrows both its syntax and semantics from the functional programming language Haskell. The merits of using a functional language to describe hardware come from the fact that combinational circuits can be directly modeled as mathematical functions and that functional languages lend themselves very well to describing and (de)composing mathematical functions. This package provides:
  • Prelude library containing datatypes and functions for circuit design
To use the library: for now, Clash.Prelude is the best starting point for exploring the library. A preliminary version of a tutorial can be found in Clash.Tutorial, and some circuit examples can be found in Clash.Examples.


This module supplies a convenient set of imports for working with the dimensional package, including aliases for common Quantitys and Dimensions, and a comprehensive set of SI units and units accepted for use with the SI. It re-exports the Prelude, hiding arithmetic functions whose names collide with the dimensionally-typed versions supplied by this package.
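A minimal sketch of what that looks like in practice (assuming the dimensional package; the numbers are illustrative): the hidden arithmetic operators are replaced by unit-aware ones, so mixing incompatible dimensions is a compile-time error:

```haskell
import Numeric.Units.Dimensional.Prelude
import qualified Prelude

-- (*~) attaches a unit to a plain number; (/~) reads a plain number
-- back out in the requested unit. The (/) used here is the
-- dimensionally-typed division exported by this prelude, not Prelude's.
speed :: Velocity Double
speed = (100 *~ meter) / (9.58 *~ second)

main :: Prelude.IO ()
main = Prelude.print (speed /~ (meter / second))
```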
Generic deriving for standard classes in base


This is an internal module: it is not subject to any versioning policy, breaking changes can happen at any time. If something here seems useful, please report it or create a pull request to export it from an external module.
This module presents a prelude mostly like the post-Applicative-Monad world of base >= 4.8 / GHC >= 7.10, as well as the post-Semigroup-Monoid world of base >= 4.11 / GHC >= 8.4, even on earlier versions. It is intended as an internal library for llvm-hs-pure and llvm-hs; it is exposed only to be shared between the two.
This module reexports the non-conflicting definitions from the modules exported by this package, providing a much more featureful alternative to the standard Prelude. For details check out the source.
Copyright: (C) 2013 Amgen, Inc. DEPRECATED: use Language.R instead.
Utility functions and re-exports for a more ergonomic developing experience. Users themselves will not find much use here.