
Effect 3.5 (Release)

Effect 3.5.0 has been released! This release includes a number of new features and improvements. Here’s a summary of what’s new:

If you add a cause property to Data.Error or Data.TaggedError, it will now be properly forwarded to the cause property of the Error instance.

```ts
import { Data } from "effect"

class MyError extends Data.Error<{ cause: Error }> {}
```

If you Effect.log a Cause containing an error with a cause property, it will now be visible in the log output.

The @effect/sql-d1 package has been released. This package provides @effect/sql support for Cloudflare’s D1 database.

RcRef and RcMap are new reference counted types that can be used to manage resources.

The wrapped resource will be acquired on the first access and released when no longer in use.

RcRef can be used to manage a single resource, and RcMap can be used to manage multiple resources referenced by a key.

```ts
import { Effect, RcMap } from "effect"

Effect.gen(function* () {
  const map = yield* RcMap.make({
    lookup: (key: string) =>
      Effect.acquireRelease(Effect.succeed(`acquired ${key}`), () =>
        Effect.log(`releasing ${key}`)
      )
  })

  // Get "foo" from the map twice, which will only acquire it once.
  // It will then be released once the scope closes.
  yield* RcMap.get(map, "foo").pipe(
    Effect.andThen(RcMap.get(map, "foo")),
    Effect.scoped
  )
})
```

Logger.pretty is a new logger that leverages the features of the console APIs to provide a more visually appealing output.

To try it out, provide it to your program:

```ts
import { Effect, Logger } from "effect"

Effect.log("Hello, World!").pipe(Effect.provide(Logger.pretty))
```

In Effect 4.0, Logger.pretty will become the default logger.

The replay option adds a replay buffer in front of the given PubSub. The buffer will replay the last n messages to any new subscriber.

```ts
import { Chunk, Effect, PubSub, Queue } from "effect"
import * as assert from "node:assert"

Effect.gen(function* () {
  const messages = [1, 2, 3, 4, 5]
  const pubsub = yield* PubSub.bounded<number>({ capacity: 16, replay: 3 })
  yield* PubSub.publishAll(pubsub, messages)
  const sub = yield* PubSub.subscribe(pubsub)
  assert.deepStrictEqual(Chunk.toReadonlyArray(yield* Queue.takeAll(sub)), [3, 4, 5])
})
```

Stream.raceAll races the given streams, with the first stream to emit an item declared the winner. The resulting stream will emit the items from the winning stream.

```ts
import { Console, Effect, Schedule, Stream } from "effect"

const stream = Stream.raceAll(
  Stream.fromSchedule(Schedule.spaced("1 millis")),
  Stream.fromSchedule(Schedule.spaced("2 millis")),
  Stream.fromSchedule(Schedule.spaced("4 millis"))
).pipe(Stream.take(6), Stream.tap(Console.log))

Effect.runPromise(Stream.runDrain(stream))
// Output only from the first stream; the other streams are interrupted
// 0
// 1
// 2
// 3
// 4
// 5
```

Random.make creates a new instance of the Random service from a seed value.

It will calculate the hash of the seed value, and use that to seed the random number generator.

You can now customize the output buffer options for Stream.async*:

```ts
import { Stream } from "effect"

Stream.async((emit) => {
  // ...
}, { bufferSize: 16, strategy: "dropping" })
```

You can now customize the strategy and capacity of the underlying PubSub in the following Stream APIs:

  • Stream.toPubSub
  • Stream.broadcast*
```ts
import { Schedule, Stream } from "effect"

// toPubSub
Stream.fromSchedule(Schedule.spaced(1000)).pipe(
  Stream.toPubSub({
    capacity: 16, // or "unbounded"
    strategy: "dropping" // or "sliding" / "suspend"
  })
)

// also for the broadcast apis
Stream.fromSchedule(Schedule.spaced(1000)).pipe(
  Stream.broadcastDynamic({
    capacity: 16,
    strategy: "dropping"
  })
)
```
Other notable changes:

  • Stream & Channel run* methods now exclude Scope from the R type.
  • Use of Stream.DynamicTuple has been replaced with Types.TupleOf.
  • Stream.mergeLeft & Stream.mergeRight now use left / right naming instead of self / that.

There were several other smaller changes. Take a look through the CHANGELOG to see them all.

Don’t forget to join our Discord Community to follow the latest updates and discuss every tiny detail!