# Some very simple parallelism using spawn #

defmodule Tsk do
  @moduledoc """
  A module wrapper for the tasks.
  """

  @doc """
  Tail-recursively count from `v` down to 0, adding 1 to `res` on each call
  (so the result equals `v`). Used here purely as CPU-bound work.

  The function scales linearly up to 1000000000, but going from 1000000000 to
  2000000000 it gets MUCH worse than linear. What happens? Is it just handling
  of big integers?

  Serial testing with three reps:

      1000000000
      elixir task.ex  9.50s user 0.16s system 96% cpu 9.968 total
      2000000000
      elixir task.ex  112.62s user 0.54s system 98% cpu 1:54.35 total

  Parallel testing with unlimited processes (on my mac -- 8 cores) and doing
  the task 10 times:

      elixir task.ex  31.06s user 0.24s system 588% cpu 5.316 total
  """
  def trsumm(0, res, sv) do
    IO.puts("#{res} #{sv}")
    res
  end

  def trsumm(v, res, sv) do
    trsumm(v - 1, res + 1, sv)
  end

  @doc """
  Given a list of PIDs, do not return until none of them is alive.

  Polls by tail recursion: check whether any PID is still alive, sleep
  briefly, then check again.
  """
  def alive(tasks) do
    if Enum.any?(tasks, &Process.alive?/1) do
      :timer.sleep(50)
      alive(tasks)
    end
  end

  @doc """
  Test using spawned processes. Enum.map starts one process per element.
  spawn is a little awkward in that it takes a function; it returns a PID,
  so Enum.map yields a list of PIDs.
  """
  def main() do
    tasks =
      Enum.map(1..24, fn sv ->
        spawn(fn -> Tsk.trsumm(1_000_000_000, 0, sv) end)
      end)

    alive(tasks)
  end
end

Tsk.main()
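
# A small measurement sketch for the timing question raised in the trsumm doc:
# :timer.tc/1 returns {microseconds, result}, so this prints wall time for a
# few input sizes. The module name, function name, and the sizes listed are
# assumptions for illustration only; this module is never called by the script,
# and 2_000_000_000 takes minutes if you add it to the list.
defmodule TimeSketch do
  def measure(sizes \\ [100_000_000, 1_000_000_000]) do
    Enum.each(sizes, fn n ->
      # Time one serial run of trsumm; sv is only echoed by trsumm, so pass n.
      {usec, _res} = :timer.tc(fn -> Tsk.trsumm(n, 0, n) end)
      IO.puts("n=#{n} took #{usec / 1_000_000} s")
    end)
  end
end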
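
# An alternative sketch to the polling in alive/1 (illustrative only, not
# called anywhere in this script): monitor each spawned PID and block on the
# :DOWN messages instead of sleeping and re-checking Process.alive?/1. The
# module and function names (WaitSketch.await_all/1) are made up for this
# example.
defmodule WaitSketch do
  def await_all(pids) do
    # Monitor each PID; monitoring an already-dead PID still delivers :DOWN.
    refs = Enum.map(pids, &Process.monitor/1)

    # Block once per monitor reference until its :DOWN message arrives.
    Enum.each(refs, fn ref ->
      receive do
        {:DOWN, ^ref, :process, _pid, _reason} -> :ok
      end
    end)
  end
end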
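
# Another illustrative sketch: the standard-library Task module covers the same
# spawn-then-wait pattern as main/0 plus alive/1. Task.async/1 starts a linked,
# monitored process and Task.await_many/2 blocks until all of them finish
# (it needs Elixir >= 1.11). The workload size and the :infinity timeout are
# assumptions chosen to mirror main/0, not choices from the original script.
defmodule TaskSketch do
  def run do
    1..24
    |> Enum.map(fn sv -> Task.async(fn -> Tsk.trsumm(1_000_000_000, 0, sv) end) end)
    |> Task.await_many(:infinity)
  end
end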