public interface Stream<T> extends BaseStream<T,Stream<T>>

The following example illustrates an aggregate operation using Stream and IntStream, computing the sum of the weights of the red widgets:

    int sum = widgets.stream()
                     .filter(w -> w.getColor() == RED)
                     .mapToInt(w -> w.getWeight())
                     .sum();
In this example, widgets is a Collection<Widget>. We create a stream of Widget objects via Collection.stream(), filter it to produce a stream containing only the red widgets, and then transform it into a stream of int values representing the weight of each red widget. Then this stream is summed to produce a total weight.
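A self-contained sketch of the pipeline above; the Widget class, Color enum, and sample data are hypothetical stand-ins, since the doc only assumes getColor() and getWeight():

```java
import java.util.List;

// Hypothetical Widget/Color types standing in for the doc's example domain.
public class WidgetSum {
    enum Color { RED, BLUE }

    static class Widget {
        private final Color color;
        private final int weight;
        Widget(Color color, int weight) { this.color = color; this.weight = weight; }
        Color getColor() { return color; }
        int getWeight() { return weight; }
    }

    public static void main(String[] args) {
        List<Widget> widgets = List.of(
                new Widget(Color.RED, 5),
                new Widget(Color.BLUE, 3),
                new Widget(Color.RED, 7));

        int sum = widgets.stream()
                         .filter(w -> w.getColor() == Color.RED) // keep red widgets
                         .mapToInt(Widget::getWeight)            // Stream<Widget> -> IntStream
                         .sum();                                 // terminal operation
        System.out.println(sum); // 12
    }
}
```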
In addition to Stream, which is a stream of object references, there are primitive specializations for IntStream, LongStream, and DoubleStream, all of which are referred to as "streams" and conform to the characteristics and restrictions described here.
To perform a computation, stream operations are composed into a stream pipeline. A stream pipeline consists of a source (which might be an array, a collection, a generator function, an I/O channel, etc.), zero or more intermediate operations (which transform a stream into another stream, such as filter(Predicate)), and a terminal operation (which produces a result or side-effect, such as count() or forEach(Consumer)). Streams are lazy; computation on the source data is only performed when the terminal operation is initiated, and source elements are consumed only as needed.
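The laziness can be observed by counting how many times a peek() action runs before and after the terminal operation. A small demo; the filter is deliberate, since without it a modern JDK may compute count() from the source size without traversing:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Stream<Integer> pipeline = List.of(1, 2, 3, 4).stream()
                .peek(n -> calls.incrementAndGet()) // runs only when elements are pulled
                .filter(n -> n % 2 == 0);

        // No terminal operation yet: the source has not been touched.
        System.out.println(calls.get()); // 0

        long evens = pipeline.count();   // terminal operation triggers traversal
        System.out.println(evens);       // 2
        System.out.println(calls.get()); // 4 -- every element was consumed
    }
}
```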
Collections and streams, while bearing some superficial similarities, have different goals. Collections are primarily concerned with the efficient management of, and access to, their elements. By contrast, streams do not provide a means to directly access or manipulate their elements, and are instead concerned with declaratively describing their source and the computational operations which will be performed in aggregate on that source. However, if the provided stream operations do not offer the desired functionality, the BaseStream.iterator() and BaseStream.spliterator() operations can be used to perform a controlled traversal.
A stream pipeline, like the "widgets" example above, can be viewed as a query on the stream source. Unless the source was explicitly designed for concurrent modification (such as a ConcurrentHashMap), unpredictable or erroneous behavior may result from modifying the stream source while it is being queried.
Most stream operations accept parameters that describe user-specified behavior, such as the lambda expression w -> w.getWeight() passed to mapToInt in the example above. To preserve correct behavior, these behavioral parameters must be non-interfering (they do not modify the stream source) and, in most cases, must be stateless (their result should not depend on any state that might change during execution of the stream pipeline). Such parameters are always instances of a functional interface such as Function, and are often lambda expressions or method references. Unless otherwise specified these parameters must be non-null.
A stream should be operated on (invoking an intermediate or terminal stream operation) only once. This rules out, for example, "forked" streams, where the same source feeds two or more pipelines, or multiple traversals of the same stream. A stream implementation may throw IllegalStateException if it detects that the stream is being reused. However, since some stream operations may return their receiver rather than a new stream object, it may not be possible to detect reuse in all cases.
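A small sketch of the single-use rule: a second terminal operation on the same stream instance is detected and rejected (exact detection behavior is implementation-dependent, per the text above):

```java
import java.util.stream.Stream;

public class ReuseDemo {
    public static void main(String[] args) {
        Stream<String> s = Stream.of("a", "b", "c");
        long n = s.count();                 // first (and only allowed) terminal operation
        boolean detected = false;
        try {
            s.forEach(System.out::println); // second use of the same stream instance
        } catch (IllegalStateException e) {
            detected = true;                // "stream has already been operated upon or closed"
        }
        System.out.println(n + " " + detected); // 3 true
    }
}
```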
Streams have a BaseStream.close() method and implement AutoCloseable, but nearly all stream instances do not actually need to be closed after use. Generally, only streams whose source is an IO channel (such as those returned by Files.lines(Path, Charset)) will require closing. Most streams are backed by collections, arrays, or generating functions, which require no special resource management. (If a stream does require closing, it can be declared as a resource in a try-with-resources statement.)
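A sketch of the close-on-IO guidance, using a temporary file so it is self-contained:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CloseDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("close-demo", ".txt");
        Files.write(tmp, "alpha\nbeta\ngamma\n".getBytes(StandardCharsets.UTF_8));

        // Streams over IO channels hold a resource; try-with-resources closes it.
        long count;
        try (Stream<String> lines = Files.lines(tmp, StandardCharsets.UTF_8)) {
            count = lines.filter(l -> !l.isEmpty()).count();
        } // lines.close() is invoked here, releasing the underlying file handle
        System.out.println(count); // 3
        Files.delete(tmp);
    }
}
```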
Stream pipelines may execute either sequentially or in parallel. This execution mode is a property of the stream. Streams are created with an initial choice of sequential or parallel execution. (For example, Collection.stream() creates a sequential stream, and Collection.parallelStream() creates a parallel one.) This choice of execution mode may be modified by the BaseStream.sequential() or BaseStream.parallel() methods, and may be queried with the BaseStream.isParallel() method.
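A minimal demonstration of the execution-mode property; the last sequential()/parallel() call before the terminal operation wins for the whole pipeline:

```java
import java.util.List;

public class ModeDemo {
    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4);

        boolean seq = nums.stream().isParallel();          // false
        boolean par = nums.parallelStream().isParallel();  // true
        // The mode can be flipped anywhere before the terminal operation.
        boolean flipped = nums.stream().parallel().isParallel(); // true
        int sum = nums.parallelStream().sequential()
                      .mapToInt(Integer::intValue).sum();  // runs sequentially

        System.out.println(seq + " " + par + " " + flipped + " " + sum); // false true true 10
    }
}
```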
See Also: IntStream, LongStream, DoubleStream, java.util.stream

Method Summary:

boolean allMatch(Predicate<? super T> predicate)
boolean anyMatch(Predicate<? super T> predicate)
static <T> Stream.Builder<T> builder()
<R,A> R collect(Collector<? super T,A,R> collector)
<R> R collect(Supplier<R> supplier, BiConsumer<R,? super T> accumulator, BiConsumer<R,R> combiner)
static <T> Stream<T> concat(Stream<? extends T> a, Stream<? extends T> b)
long count()
Stream<T> distinct()
static <T> Stream<T> empty()
Stream<T> filter(Predicate<? super T> predicate)
Optional<T> findAny()
Optional<T> findFirst()
<R> Stream<R> flatMap(Function<? super T,? extends Stream<? extends R>> mapper)
DoubleStream flatMapToDouble(Function<? super T,? extends DoubleStream> mapper)
IntStream flatMapToInt(Function<? super T,? extends IntStream> mapper)
LongStream flatMapToLong(Function<? super T,? extends LongStream> mapper)
void forEach(Consumer<? super T> action)
void forEachOrdered(Consumer<? super T> action)
static <T> Stream<T> generate(Supplier<T> s)
static <T> Stream<T> iterate(T seed, UnaryOperator<T> f)
Stream<T> limit(long maxSize)
<R> Stream<R> map(Function<? super T,? extends R> mapper)
DoubleStream mapToDouble(ToDoubleFunction<? super T> mapper)
IntStream mapToInt(ToIntFunction<? super T> mapper)
LongStream mapToLong(ToLongFunction<? super T> mapper)
Optional<T> max(Comparator<? super T> comparator)
Optional<T> min(Comparator<? super T> comparator)
boolean noneMatch(Predicate<? super T> predicate)
static <T> Stream<T> of(T... values)
static <T> Stream<T> of(T t)
Stream<T> peek(Consumer<? super T> action)
Optional<T> reduce(BinaryOperator<T> accumulator)
T reduce(T identity, BinaryOperator<T> accumulator)
<U> U reduce(U identity, BiFunction<U,? super T,U> accumulator, BinaryOperator<U> combiner)
Stream<T> skip(long n)
Stream<T> sorted()
Stream<T> sorted(Comparator<? super T> comparator)
Object[] toArray()
<A> A[] toArray(IntFunction<A[]> generator)
Stream<T> filter(Predicate<? super T> predicate)
Returns a stream consisting of the elements of this stream that match the given predicate.
This is an intermediate operation.
Parameters: predicate - a non-interfering, stateless predicate to apply to each element to determine if it should be included
<R> Stream<R> map(Function<? super T,? extends R> mapper)
Returns a stream consisting of the results of applying the given function to the elements of this stream.
This is an intermediate operation.
Type Parameters: R - the element type of the new stream
Parameters: mapper - a non-interfering, stateless function to apply to each element
IntStream mapToInt(ToIntFunction<? super T> mapper)
Returns an IntStream consisting of the results of applying the given function to the elements of this stream.
This is an intermediate operation.
Parameters: mapper - a non-interfering, stateless function to apply to each element
LongStream mapToLong(ToLongFunction<? super T> mapper)
Returns a LongStream consisting of the results of applying the given function to the elements of this stream.
This is an intermediate operation.
Parameters: mapper - a non-interfering, stateless function to apply to each element
DoubleStream mapToDouble(ToDoubleFunction<? super T> mapper)
Returns a DoubleStream consisting of the results of applying the given function to the elements of this stream.
This is an intermediate operation.
Parameters: mapper - a non-interfering, stateless function to apply to each element
<R> Stream<R> flatMap(Function<? super T,? extends Stream<? extends R>> mapper)
Returns a stream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)
This is an intermediate operation.
The flatMap() operation has the effect of applying a one-to-many transformation to the elements of the stream, and then flattening the resulting elements into a new stream.
Examples. If orders is a stream of purchase orders, and each purchase order contains a collection of line items, then the following produces a stream containing all the line items in all the orders:

    orders.flatMap(order -> order.getLineItems().stream())...

If path is the path to a file, then the following produces a stream of the words contained in that file:

    Stream<String> lines = Files.lines(path, StandardCharsets.UTF_8);
    Stream<String> words = lines.flatMap(line -> Stream.of(line.split(" +")));

The mapper function passed to flatMap splits a line, using a simple regular expression, into an array of words, and then creates a stream of words from that array.
Type Parameters: R - the element type of the new stream
Parameters: mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
IntStream flatMapToInt(Function<? super T,? extends IntStream> mapper)
Returns an IntStream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)
This is an intermediate operation.
Parameters: mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
See Also: flatMap(Function)
LongStream flatMapToLong(Function<? super T,? extends LongStream> mapper)
Returns a LongStream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)
This is an intermediate operation.
Parameters: mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
See Also: flatMap(Function)
DoubleStream flatMapToDouble(Function<? super T,? extends DoubleStream> mapper)
Returns a DoubleStream consisting of the results of replacing each element of this stream with the contents of a mapped stream produced by applying the provided mapping function to each element. Each mapped stream is closed after its contents have been placed into this stream. (If a mapped stream is null an empty stream is used, instead.)
This is an intermediate operation.
Parameters: mapper - a non-interfering, stateless function to apply to each element which produces a stream of new values
See Also: flatMap(Function)
Stream<T> distinct()
Returns a stream consisting of the distinct elements (according to Object.equals(Object)) of this stream.
For ordered streams, the selection of distinct elements is stable (for duplicated elements, the element appearing first in the encounter order is preserved). For unordered streams, no stability guarantees are made.
This is a stateful intermediate operation.
Preserving stability for distinct() in parallel pipelines is relatively expensive (it requires that the operation act as a full barrier, with substantial buffering overhead), and stability is often not needed. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significantly more efficient execution for distinct() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with distinct() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
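A small sketch contrasting ordered distinct() (stable, first occurrence kept) with an unordered parallel variant (same element set, no ordering guarantee):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class DistinctDemo {
    public static void main(String[] args) {
        List<Integer> data = List.of(3, 1, 3, 2, 1, 2, 3);

        // Ordered: duplicates removed, first occurrences kept, order preserved.
        List<Integer> ordered = data.stream().distinct().collect(Collectors.toList());
        System.out.println(ordered); // [3, 1, 2]

        // Parallel + unordered: same set of elements, but no ordering guarantee,
        // which lets distinct() avoid acting as a full barrier.
        Set<Integer> unordered = data.parallelStream().unordered()
                                     .distinct().collect(Collectors.toSet());
        System.out.println(unordered.size()); // 3
    }
}
```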
Stream<T> sorted()
Returns a stream consisting of the elements of this stream, sorted according to natural order. If the elements of this stream are not Comparable, a java.lang.ClassCastException may be thrown when the terminal operation is executed.
For ordered streams, the sort is stable. For unordered streams, no stability guarantees are made.
This is a stateful intermediate operation.
Stream<T> sorted(Comparator<? super T> comparator)
Returns a stream consisting of the elements of this stream, sorted according to the provided Comparator.
For ordered streams, the sort is stable. For unordered streams, no stability guarantees are made.
This is a stateful intermediate operation.
Parameters: comparator - a non-interfering, stateless Comparator to be used to compare stream elements
Stream<T> peek(Consumer<? super T> action)
Returns a stream consisting of the elements of this stream, additionally performing the provided action on each element as elements are consumed from the resulting stream.
This is an intermediate operation.
For parallel stream pipelines, the action may be called at whatever time and in whatever thread the element is made available by the upstream operation. If the action modifies shared state, it is responsible for providing the required synchronization.
Parameters: action - a non-interfering action to perform on the elements as they are consumed from the stream
Stream<T> limit(long maxSize)
Returns a stream consisting of the elements of this stream, truncated to be no longer than maxSize in length.
This is a short-circuiting stateful intermediate operation.
While limit() is generally a cheap operation on sequential stream pipelines, it can be quite expensive on ordered parallel pipelines, especially for large values of maxSize, since limit(n) is constrained to return not just any n elements, but the first n elements in the encounter order. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significant speedups of limit() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with limit() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
Parameters: maxSize - the number of elements the stream should be limited to
Throws: IllegalArgumentException - if maxSize is negative
Stream<T> skip(long n)
Returns a stream consisting of the remaining elements of this stream after discarding the first n elements of the stream. If this stream contains fewer than n elements then an empty stream will be returned.
This is a stateful intermediate operation.
While skip() is generally a cheap operation on sequential stream pipelines, it can be quite expensive on ordered parallel pipelines, especially for large values of n, since skip(n) is constrained to skip not just any n elements, but the first n elements in the encounter order. Using an unordered stream source (such as generate(Supplier)) or removing the ordering constraint with BaseStream.unordered() may result in significant speedups of skip() in parallel pipelines, if the semantics of your situation permit. If consistency with encounter order is required, and you are experiencing poor performance or memory utilization with skip() in parallel pipelines, switching to sequential execution with BaseStream.sequential() may improve performance.
Parameters: n - the number of leading elements to skip
Throws: IllegalArgumentException - if n is negative
void forEach(Consumer<? super T> action)
Performs an action for each element of this stream.
This is a terminal operation.
The behavior of this operation is explicitly nondeterministic. For parallel stream pipelines, this operation does not guarantee to respect the encounter order of the stream, as doing so would sacrifice the benefit of parallelism. For any given element, the action may be performed at whatever time and in whatever thread the library chooses. If the action accesses shared state, it is responsible for providing the required synchronization.
Parameters: action - a non-interfering action to perform on the elements
void forEachOrdered(Consumer<? super T> action)
Performs an action for each element of this stream, in the encounter order of the stream if the stream has a defined encounter order.
This is a terminal operation.
This operation processes the elements one at a time, in encounter order if one exists. Performing the action for one element happens-before performing the action for subsequent elements, but for any given element, the action may be performed in whatever thread the library chooses.
Parameters: action - a non-interfering action to perform on the elements
See Also: forEach(Consumer)
<A> A[] toArray(IntFunction<A[]> generator)
Returns an array containing the elements of this stream, using the provided generator function to allocate the returned array, as well as any additional arrays that might be required for a partitioned execution or for resizing.
This is a terminal operation.
Type Parameters: A - the element type of the resulting array
Parameters: generator - a function which produces a new array of the desired type and the provided length
Throws: ArrayStoreException - if the runtime type of the array returned from the array generator is not a supertype of the runtime type of every element in this stream
T reduce(T identity, BinaryOperator<T> accumulator)
Performs a reduction on the elements of this stream, using the provided identity value and an associative accumulation function, and returns the reduced value.
The identity value must be an identity for the accumulator function. This means that for all t, accumulator.apply(identity, t) is equal to t. The accumulator function must be an associative function.
This is a terminal operation.
While this may seem a more roundabout way to perform an aggregation compared to simply mutating a running total in a loop, reduction operations parallelize more gracefully, without needing additional synchronization and with greatly reduced risk of data races.
Parameters: identity - the identity value for the accumulating function; accumulator - an associative, non-interfering, stateless function for combining two values
Optional<T> reduce(BinaryOperator<T> accumulator)
Performs a reduction on the elements of this stream, using an associative accumulation function, and returns an Optional describing the reduced value, if any. This is equivalent to:

    boolean foundAny = false;
    T result = null;
    for (T element : this stream) {
        if (!foundAny) {
            foundAny = true;
            result = element;
        }
        else
            result = accumulator.apply(result, element);
    }
    return foundAny ? Optional.of(result) : Optional.empty();

but is not constrained to execute sequentially.
The accumulator function must be an associative function.
This is a terminal operation.
Parameters: accumulator - an associative, non-interfering, stateless function for combining two values
Returns: an Optional describing the result of the reduction
Throws: NullPointerException - if the result of the reduction is null
See Also: reduce(Object, BinaryOperator), min(Comparator), max(Comparator)
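A short usage sketch of the Optional-returning reduce, including the empty-stream case:

```java
import java.util.Optional;
import java.util.stream.Stream;

public class ReduceDemo {
    public static void main(String[] args) {
        // Longest string wins; ties keep the earlier element.
        Optional<String> longest = Stream.of("ab", "abcd", "a")
                .reduce((x, y) -> y.length() > x.length() ? y : x);
        System.out.println(longest.get()); // abcd

        // Empty stream: there is no reduced value.
        Optional<Integer> none = Stream.<Integer>empty().reduce(Integer::sum);
        System.out.println(none.isPresent()); // false
    }
}
```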
<U> U reduce(U identity, BiFunction<U,? super T,U> accumulator, BinaryOperator<U> combiner)
Performs a reduction on the elements of this stream, using the provided identity, accumulation and combining functions.
The identity value must be an identity for the combiner function. This means that for all u, combiner(identity, u) is equal to u. Additionally, the combiner function must be compatible with the accumulator function; for all u and t, the following must hold:

    combiner.apply(u, accumulator.apply(identity, t)) == accumulator.apply(u, t)

This is a terminal operation.
Many reductions using this form can be represented more simply by an explicit combination of map and reduce operations. The accumulator function acts as a fused mapper and accumulator, which can sometimes be more efficient than separate mapping and reduction, such as when knowing the previously reduced value allows you to avoid some computation.
Type Parameters: U - the type of the result
Parameters: identity - the identity value for the combiner function; accumulator - an associative, non-interfering, stateless function for incorporating an additional element into a result; combiner - an associative, non-interfering, stateless function for combining two values, which must be compatible with the accumulator function
See Also: reduce(BinaryOperator), reduce(Object, BinaryOperator)
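A sketch of the fused map-and-accumulate form. The identity, accumulator, and combiner here satisfy the compatibility law: 0 is an identity for Integer::sum, and combiner.apply(u, accumulator.apply(0, t)) equals accumulator.apply(u, t):

```java
import java.util.stream.Stream;

public class ReduceCombinerDemo {
    public static void main(String[] args) {
        // identity = 0; the accumulator folds a String into a running int length;
        // the combiner merges partial sums from parallel subtasks.
        int totalLength = Stream.of("stream", "pipeline", "lazy")
                .parallel()
                .reduce(0,
                        (len, s) -> len + s.length(), // fused map + accumulate
                        Integer::sum);                // combine partial results
        System.out.println(totalLength); // 6 + 8 + 4 = 18
    }
}
```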
<R> R collect(Supplier<R> supplier, BiConsumer<R,? super T> accumulator, BiConsumer<R,R> combiner)
Performs a mutable reduction operation on the elements of this stream. A mutable reduction is one in which the reduced value is a mutable result container, such as an ArrayList, and elements are incorporated by updating the state of the result rather than by replacing the result. This produces a result equivalent to:

    R result = supplier.get();
    for (T element : this stream)
        accumulator.accept(result, element);
    return result;

Like reduce(Object, BinaryOperator), collect operations can be parallelized without requiring additional synchronization.
This is a terminal operation.
There are many existing classes in the JDK whose signatures are well-suited for use with method references as arguments to collect(). For example, the following will accumulate strings into an ArrayList:

    List<String> asList = stringStream.collect(ArrayList::new, ArrayList::add,
                                               ArrayList::addAll);

The following will take a stream of strings and concatenate them into a single string:

    String concat = stringStream.collect(StringBuilder::new, StringBuilder::append,
                                         StringBuilder::append)
                                .toString();

Type Parameters: R - the type of the result
Parameters: supplier - a function that creates a new result container (for a parallel execution, this function may be called multiple times and must return a fresh value each time); accumulator - an associative, non-interfering, stateless function for incorporating an additional element into a result; combiner - an associative, non-interfering, stateless function for combining two values, which must be compatible with the accumulator function
<R,A> R collect(Collector<? super T,A,R> collector)
Performs a mutable reduction operation on the elements of this stream using a Collector. A Collector encapsulates the functions used as arguments to collect(Supplier, BiConsumer, BiConsumer), allowing for reuse of collection strategies and composition of collect operations such as multiple-level grouping or partitioning.
If the stream is parallel, and the Collector is concurrent, and either the stream is unordered or the collector is unordered, then a concurrent reduction will be performed (see Collector for details on concurrent reduction).
This is a terminal operation.
When executed in parallel, multiple intermediate results may be instantiated, populated, and merged so as to maintain isolation of mutable data structures. Therefore, even when executed in parallel with non-thread-safe data structures (such as ArrayList), no additional synchronization is needed for a parallel reduction.
The following will classify Person objects by city:

    Map<String, List<Person>> peopleByCity
        = personStream.collect(Collectors.groupingBy(Person::getCity));

The following will classify Person objects by state and city, cascading two Collectors together:

    Map<String, Map<String, List<Person>>> peopleByStateAndCity
        = personStream.collect(Collectors.groupingBy(Person::getState,
                                                     Collectors.groupingBy(Person::getCity)));

Type Parameters: R - the type of the result; A - the intermediate accumulation type of the Collector
Parameters: collector - the Collector describing the reduction
See Also: collect(Supplier, BiConsumer, BiConsumer), Collectors
Optional<T> min(Comparator<? super T> comparator)
Returns the minimum element of this stream according to the provided Comparator. This is a special case of a reduction.
This is a terminal operation.
Parameters: comparator - a non-interfering, stateless Comparator to compare elements of this stream
Returns: an Optional describing the minimum element of this stream, or an empty Optional if the stream is empty
Throws: NullPointerException - if the minimum element is null
Optional<T> max(Comparator<? super T> comparator)
Returns the maximum element of this stream according to the provided Comparator. This is a special case of a reduction.
This is a terminal operation.
Parameters: comparator - a non-interfering, stateless Comparator to compare elements of this stream
Returns: an Optional describing the maximum element of this stream, or an empty Optional if the stream is empty
Throws: NullPointerException - if the maximum element is null
long count()
Returns the count of elements in this stream. This is a special case of a reduction.
This is a terminal operation.
boolean anyMatch(Predicate<? super T> predicate)
Returns whether any elements of this stream match the provided predicate. May not evaluate the predicate on all elements if not necessary for determining the result. If the stream is empty then false is returned and the predicate is not evaluated.
This is a short-circuiting terminal operation.
Parameters: predicate - a non-interfering, stateless predicate to apply to elements of this stream
Returns: true if any elements of the stream match the provided predicate, otherwise false
boolean allMatch(Predicate<? super T> predicate)
Returns whether all elements of this stream match the provided predicate. May not evaluate the predicate on all elements if not necessary for determining the result. If the stream is empty then true is returned and the predicate is not evaluated.
This is a short-circuiting terminal operation.
This method evaluates the universal quantification of the predicate over the elements of the stream (for all x P(x)). If the stream is empty, the quantification is said to be vacuously satisfied and is always true (regardless of P(x)).
Parameters: predicate - a non-interfering, stateless predicate to apply to elements of this stream
Returns: true if either all elements of the stream match the provided predicate or the stream is empty, otherwise false
boolean noneMatch(Predicate<? super T> predicate)
Returns whether no elements of this stream match the provided predicate. May not evaluate the predicate on all elements if not necessary for determining the result. If the stream is empty then true is returned and the predicate is not evaluated.
This is a short-circuiting terminal operation.
This method evaluates the universal quantification of the negated predicate over the elements of the stream (for all x ~P(x)). If the stream is empty, the quantification is said to be vacuously satisfied and is always true, regardless of P(x).
Parameters: predicate - a non-interfering, stateless predicate to apply to elements of this stream
Returns: true if either no elements of the stream match the provided predicate or the stream is empty, otherwise false
Optional<T> findFirst()
Returns an Optional describing the first element of this stream, or an empty Optional if the stream is empty. If the stream has no encounter order, then any element may be returned.
This is a short-circuiting terminal operation.
Returns: an Optional describing the first element of this stream, or an empty Optional if the stream is empty
Throws: NullPointerException - if the element selected is null
Optional<T> findAny()
Returns an Optional describing some element of the stream, or an empty Optional if the stream is empty.
This is a short-circuiting terminal operation.
The behavior of this operation is explicitly nondeterministic; it is free to select any element in the stream. This is to allow for maximal performance in parallel operations; the cost is that multiple invocations on the same source may not return the same result. (If a stable result is desired, use findFirst() instead.)
Returns: an Optional describing some element of this stream, or an empty Optional if the stream is empty
Throws: NullPointerException - if the element selected is null
See Also: findFirst()
@SafeVarargs static <T> Stream<T> of(T... values)
Returns a sequential ordered stream whose elements are the specified values.
Type Parameters: T - the type of stream elements
Parameters: values - the elements of the new stream
static <T> Stream<T> iterate(T seed, UnaryOperator<T> f)
Returns an infinite sequential ordered Stream produced by iterative application of a function f to an initial element seed, producing a Stream consisting of seed, f(seed), f(f(seed)), etc.
The first element (position 0) in the Stream will be the provided seed. For n > 0, the element at position n will be the result of applying the function f to the element at position n - 1.
Type Parameters: T - the type of stream elements
Parameters: seed - the initial element; f - a function to be applied to the previous element to produce a new element
Returns: a new sequential Stream
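A bounded usage sketch; iterate produces an infinite stream, so limit() is required before a terminal operation:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class IterateDemo {
    public static void main(String[] args) {
        // seed, f(seed), f(f(seed)), ... -- here the powers of two.
        List<Integer> powersOfTwo = Stream.iterate(1, n -> n * 2)
                .limit(6)
                .collect(Collectors.toList());
        System.out.println(powersOfTwo); // [1, 2, 4, 8, 16, 32]
    }
}
```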
static <T> Stream<T> generate(Supplier<T> s)
Returns an infinite sequential unordered stream where each element is generated by the provided Supplier. This is suitable for generating constant streams, streams of random elements, etc.
Type Parameters: T - the type of stream elements
Parameters: s - the Supplier of generated elements
Returns: a new infinite sequential unordered Stream
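A bounded usage sketch of generate, again truncated with limit() since the stream is infinite:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class GenerateDemo {
    public static void main(String[] args) {
        // Constant stream: each element comes from the Supplier.
        List<String> pings = Stream.generate(() -> "ping")
                .limit(3)
                .collect(Collectors.toList());
        System.out.println(pings); // [ping, ping, ping]
    }
}
```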
static <T> Stream<T> concat(Stream<? extends T> a, Stream<? extends T> b)
Creates a lazily concatenated stream whose elements are all the elements of the first stream followed by all the elements of the second stream. The resulting stream is ordered if both of the input streams are ordered, and parallel if either of the input streams is parallel.
Use caution when constructing streams from repeated concatenation. Accessing an element of a deeply concatenated stream can result in deep call chains, or even StackOverflowError.
Type Parameters: T - the type of stream elements
Parameters: a - the first stream; b - the second stream
Returns: the concatenation of the two input streams