- The latest CLISP and ECL appear close to passing all tests, and may indeed pass on some platforms.
- I am not aware of any outstanding CL implementation bugs affecting lparallel besides those reported in CLISP and ECL. I am also not aware of any bugs in lparallel itself.
- If you happen to be using bignums on 32-bit x86 CCL 1.8 or earlier, you should get the latest from the CCL repository and build a new image.
- lparallel does not seem complex enough to warrant a mailing list, yet some things may not be entirely simple either. Feel free to ask questions or offer feedback on this thread, or send me email.
- I plan to remove some old deprecated aliases in version 2.0 (they are not shown in the documentation here). Now is the time to suggest incompatible changes, before the 2.0 bump.
- I have been hesitant to add a nickname to the lparallel package. The separation of the lparallel API into a handful of packages was meant to encourage people to use a subset of the API, e.g. `(:use :lparallel.cognate)`. However some people always write package-qualified symbols, and for them an `lp` or `ll` nickname would be convenient. I am not exactly against this, but it does entail a bit of peril in the form of an increased likelihood of conflict.
- I have noticed this pattern being used: `(let ((*kernel* (make-kernel ...))) ...)`
This is not recommended for three reasons. First, it makes the kernel object inaccessible to other (non-worker) threads, preventing the use of `kill-tasks` in the REPL, for example. Second, `end-kernel` is likely to be forgotten, resulting in a kernel that is never garbage collected. Third, even if we properly abstract this pattern by writing a `with-temp-kernel` macro that calls `end-kernel`, such a macro lends itself to suboptimal code because multiple uses of it would defeat the benefits of a thread pool. These issues are avoided by calling `(setf *kernel* ...)` or by binding to an existing kernel, for example `(let ((*kernel* *io-kernel*)) ...)`.
- A `with-temp-kernel` macro may still be convenient in non-critical cases such as testing, yet I hesitate to include it in the lparallel API for the reasons mentioned above.
Status update
lparallel-1.7.0 released
- added `pdotimes`
- optimized cognate functions and macros when they are called inside worker threads; e.g. `pmap` in `(future (pmap ...))` no longer blocks a worker
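A quick sketch of `pdotimes`, the parallel counterpart of `dotimes`:

```lisp
;; Iterations are divided among the workers. Writing to distinct
;; indices of a shared array is safe; any other shared state would
;; need synchronization.
(let ((squares (make-array 10)))
  (lparallel:pdotimes (i 10)
    (setf (aref squares i) (* i i)))
  squares)
```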
lparallel-1.6.0 released
- added `clear-ptree-errors` — for resuming after an error
- added `clear-ptree` — for recomputing from scratch
- improved task handling for ptrees
- `:lparallel` now in `*features*` after load
- `defpun` no longer transforms `pfuncall` forms
Auto-kill
When an evaluation fails or is interrupted, it may be convenient to automatically kill the tasks created during the evaluation. One use might be debugging a set of long-running tasks. Here is a solution using alexandria's `unwind-protect-case`.
(defpackage :example (:use :cl :lparallel :alexandria))
(in-package :example)

(defun call-with-kill-on-abort (fn task-category)
  (let ((*task-category* task-category))
    (unwind-protect-case ()
        (funcall fn)
      (:abort (kill-tasks task-category)))))

(defmacro with-kill-on-abort ((&key (task-category '*task-category*))
                              &body body)
  `(call-with-kill-on-abort (lambda () ,@body) ,task-category))

(defun foo ()
  (with-kill-on-abort (:task-category 'foo-stuff)
    (pmap nil #'sleep '(9999 9999))))
Example run in SLIME:
CL-USER> (example::foo) ; ... then hit C-c C-c
WARNING: lparallel: Replacing lost or dead worker.
WARNING: lparallel: Replacing lost or dead worker.
; Evaluation aborted on NIL.
As always, worker threads are regenerated after being killed.
Miscellany
Mapping
It should be no surprise that arrays are faster than lists for parallel mapping. The open-coded versions of `pmap` and `pmap-into`, which are triggered when a single array is mapped to an array, are particularly fast in SBCL when the array types are declared or inferred. For the extreme case of a trivial inline function applied to a large array, the speed increase can be 20X or more relative to the non-open-coded counterparts.
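For instance, a sketch of that extreme case (the declarations are what let SBCL specialize the loop; the array sizes and types here are arbitrary):

```lisp
;; A single array mapped into an array: eligible for the open-coded
;; pmap-into. Declared element types allow SBCL to pick the fast path.
(let ((src (make-array 1000000 :element-type 'double-float
                               :initial-element 1.5d0))
      (dst (make-array 1000000 :element-type 'double-float)))
  (declare (type (simple-array double-float (*)) src dst))
  (lparallel:pmap-into dst (lambda (x)
                             (declare (double-float x))
                             (* x x))
                       src))
```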
Condition handling under the hood
To the user, a task is a function designator together with arguments to the function. However the internal representation of a task is like a generalization of a closure. A closure is a function which captures the lexical variables referenced inside it. Implementation-wise, a task is a closure which captures the task handlers present when the task is created. A closure bundles a lexical environment; a task additionally bundles a dynamic environment. This is the basic theory behind parallelized condition handling in lparallel.
Communicating via conditions
Because task handlers are called immediately when a condition is signaled inside a task, condition handling offers a way to communicate between tasks and the thread which created them. Here is a task which transfers data by signaling:
(defpackage :example (:use :cl :lparallel :lparallel.queue))
(in-package :example)

(define-condition more-data ()
  ((item :reader item :initarg :item)))

(let ((channel (make-channel))
      (data (make-queue)))
  (task-handler-bind ((more-data (lambda (c)
                                   (push-queue (item c) data))))
    (submit-task channel (lambda ()
                           (signal 'more-data :item 99))))
  (receive-result channel)
  (pop-queue data))
; => 99
`receive-result` has been placed outside of `task-handler-bind` to emphasize that handlers are bundled at the point of `submit-task`. (It does not matter where `receive-result` is called.)
Multiple kernels
It may be advantageous to have a kernel devoted to a particular kind of task. For example, one could have specialized channels and futures dedicated to IO.
(defvar *io-kernel* (make-kernel 16))

(defun make-io-channel ()
  (let ((*kernel* *io-kernel*))
    (make-channel)))

(defmacro io-future (&body body)
  `(let ((*kernel* *io-kernel*))
     (future ,@body)))
Since a channel remembers its associated kernel, `submit-task` and `receive-result` do not depend upon the value of `*kernel*`. In the promise API, only `future` and `speculate` need `*kernel*`.
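A usage sketch along these lines (`fetch` is a hypothetical blocking IO function invented for illustration): the IO kernel only has to be current when the future is created, not when it is forced.

```lisp
;; FETCH stands in for some blocking IO call.
(defun fetch (url)
  (declare (ignore url))
  "response")

;; The future runs on *io-kernel*; forcing it later requires no
;; special binding of *kernel*.
(let ((f (io-future (fetch "http://example.com"))))
  ;; ... do other work ...
  (force f))
```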
lparallel-1.5.0 released
- `pmap` and `pmap-into` are now open-coded in the case of one vector being mapped to a vector — allows a large performance boost in some CL implementations (like SBCL) when array types are known
- SBCL is now able to terminate when live kernels exist — previously, `end-kernel` needed to be called on all kernels before exiting (which is good practice but is no longer required)
- added `try-receive-result` — non-blocking version of `receive-result`
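A sketch of the non-blocking receive, assuming `try-receive-result` returns the result and a present-p flag as two values:

```lisp
;; Poll a channel without blocking: the second return value tells
;; whether a result was actually available.
(let ((channel (lparallel:make-channel)))
  (lparallel:submit-task channel (lambda () (+ 1 2)))
  (loop
    (multiple-value-bind (result presentp)
        (lparallel:try-receive-result channel)
      (when presentp
        (return result))
      (sleep 0.01)))) ; do other work between polls in real code
```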
lparallel-1.4.0 released
- added function `task-categories-running`
- new special variable `*debug-tasks-p*` — setting it to false will transfer errors instead of invoking the debugger inside tasks; default is true
- added convenience function `invoke-transfer-error` for local control over debugging tasks:
(task-handler-bind ((error #'invoke-transfer-error)) ...)
(task-handler-bind ((error #'invoke-debugger)) ...)
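As a sketch of the transfer behavior (binding `*debug-tasks-p*` around `submit-task`, on the premise stated above that a task bundles its dynamic environment):

```lisp
;; With *debug-tasks-p* nil, an error signaled in a task is
;; transferred and resignaled by receive-result in the submitting
;; thread, rather than invoking the debugger inside the worker.
(let ((lparallel:*debug-tasks-p* nil)
      (channel (lparallel:make-channel)))
  (lparallel:submit-task channel (lambda () (error "boom")))
  (handler-case (lparallel:receive-result channel)
    (error (e) (format nil "caught: ~a" e))))
```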
lparallel-1.3.0 released
- new support for fine-grained parallelism with `defpun`
- new work-stealing model with lockless queues and optional spinning; enabled by default on SBCL, others default to a central queue
- added `pfind`, `pcount`, `plet-if`, `pfuncall`
- fixed a redundant restart in `chain`
- `fulfill` now accepts non-promises (in which case it never succeeds)
- removed high optimization settings exposed in some API functions
- added a shell script for unthrottling CPUs on Linux
- renamed `*kernel-task-category*` to `*task-category*`, `*kernel-task-priority*` to `*task-priority*`, `kernel-handler-bind` to `task-handler-bind`, `preduce/partial` to `preduce-partial`; old names are still available
lparallel-1.2.0 released
- added function `cancel-timeout`; `submit-timeout` now returns a timeout object
- renamed `emergency-kill-tasks` to `kill-tasks`; old name is still available
- minor optimization to ptrees
- added type checks to `psort` arguments
- switched test framework to Eos
lparallel-1.1.0 released
- added `:wait` option to `end-kernel` — blocks until the kernel has shut down (please read the documentation for `end-kernel` before using)
- bound `*print-circle*` to `t` when printing a kernel — avoids an SBCL + SLIME crash when evaluating the single form `(setf *kernel* (make-kernel …))`