4

So yeah, I'm creating a programming language. The language allows multiple threads, but all threads are synchronized with a global interpreter lock, which means only one thread is allowed to execute at a time. The only way to get threads to switch off is to explicitly tell the current thread to wait, which allows another thread to execute.

Parallel processing is of course possible by spawning multiple processes, but the variables and objects in one process cannot be accessed from another. However the language does have a fairly efficient IPC interface for communicating between processes.

My question is: Would there ever be a reason to have multiple, unsynchronized threads within a single process (thus circumventing the GIL)? Why not just put thread.wait() statements in key positions in the program logic (presuming thread.wait() isn't a CPU hog, of course)?

I understand that certain other languages that use a GIL have processor scheduling issues (cough Python), but they have all been resolved.
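Here's roughly the model I have in mind, sketched in Python with generators standing in for my language's threads; yield plays the role of the hypothetical thread.wait(), and the scheduler is just an illustration, not my actual implementation:

```python
from collections import deque

def scheduler(threads):
    """Round-robin scheduler: each 'thread' is a generator that
    yields wherever thread.wait() would be called."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)          # run until the next explicit wait point
            ready.append(t)  # still alive: requeue it
        except StopIteration:
            pass             # thread finished

log = []

def worker(name, steps):
    for i in range(steps):
        log.append((name, i))
        yield                # stands in for thread.wait()

scheduler([worker("a", 2), worker("b", 2)])
# log == [("a", 0), ("b", 0), ("a", 1), ("b", 1)]
```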

  • 3
    Your question seems to conflate the GIL and co-operative multithreading. These are distinct but related issues. – Winston Ewert Dec 17 '12 at 20:50
  • You're right, I meant co-operative multithreading. Depending on your definitions, my proposed language has both: Only one thread may execute at a time per process, and threads cannot be suspended unless thread.wait() is called. – Jonathan Graef Dec 17 '12 at 21:55
  • Yes, your language as described here has both. I answered under that assumption. –  Dec 17 '12 at 21:57
  • 2
    you may want to have a look at erlang – jk. Dec 18 '12 at 09:23

8 Answers

7

You say parallelism is handled by multiple processes, but I would beg to differ. In-process parallelism is, depending on the program, much simpler and faster, due to (universal and by-default) shared memory.

When you don't have shared memory, you need a lot of extra engineering (like a common server process, with message passing to and fro) to have any shared state outside of what the OS provides for all processes (e.g. the file system). This also implies a lot of overhead for programs with a fair bit of shared state. Yes, you need locking most of the time, provided you don't have another mechanism (like STM). But in-process locking can be much simpler/faster than IPC, and actually accessing the data is much simpler: You just fricking do it, rather than doing IPC.

You also imply (I'm inferring this from your description of thread.wait) that threads are non-preemptive, i.e. it's up to each thread when its time slice ends. Two notes on this:

  • It's incompatible with some common uses of threads (e.g. running a computation with a timeout, or running something asynchronously even if it wasn't built to be async).
  • It's unlike the threads in many common languages, so expect newcomers to be confused and expect lots of code that simply doesn't ever wait.
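For instance, the timeout case looks like this with Python's preemptive threads: the main thread regains control without the worker ever yielding, which purely co-operative threads cannot do (the delays are arbitrary illustration values):

```python
import threading
import time

done = threading.Event()

def computation():
    time.sleep(0.05)       # stands in for a long job that never yields
    done.set()

t = threading.Thread(target=computation, daemon=True)
t.start()
t.join(timeout=0.01)       # give it 10 ms, then move on regardless
timed_out = not done.is_set()
# timed_out is True: we got control back without the worker's cooperation
```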

On the other hand, I rather like the increase in determinism (if the scheduler is deterministic too, certain concurrency bugs ought to be much simpler to reproduce). It's a trade-off.

  • The idea that some code might forget to call thread.wait() is like saying some code in a traditional language might forget to synchronize itself. But you do make a good point: Newcomers to the language would have to get used to sticking thread.wait() into the middle of their long background processes. Timeouts aren't as much of a problem, since such processes can usually be run in a separate process. – Jonathan Graef Dec 17 '12 at 20:54
  • You're assuming the ability to run arbitrary code in other processes. It certainly doesn't work that way with CPython, and honestly I doubt anyone can make it work with all code. Did you implement a prototype (or at least did similar things in the past) or are still planning? –  Dec 17 '12 at 21:00
  • Still planning, unfortunately. But the language would be VM- or JIT-based, meaning program code (to create processes from) can be stored as data, and then selectively loaded into newly created processes as necessary. – Jonathan Graef Dec 17 '12 at 21:05
  • You may want to look into why CPython does not do that (at least code objects are immutable). And there are probably many potential issues atop of that. I'm no expert, I've just seen a lot of weird multiprocessing errors. –  Dec 17 '12 at 21:07
  • I am aware that loading arbitrary code at runtime can be inefficient and a security risk. But if supported properly by the language, it can also be bug-free and very useful. This is a design decision that I'm not willing to give ground on. – Jonathan Graef Dec 17 '12 at 21:13
  • That's not what I'm thinking of. Note that CPython does not share code objects, or anything else for that matter. It imports all modules again -- can you tell me why? (I have vague ideas but nothing cold and hard.) Please, at least have a thorough look at existing attempts (cough Python multiprocessing cough). –  Dec 17 '12 at 21:15
  • @delnan, that depends on the Operating System you are running. On *nix it fork()s the process and doesn't reimport the modules. Windows lacks a fork, so it actually launches a whole new interpreter. – Winston Ewert Dec 17 '12 at 21:25
  • @WinstonEwert I am aware, but I tend to ignore it, because I consider that an optimization and because relying on it may mean breaking Windows support. I consider that unacceptable, so I work with this model in mind, though it's actually faster on some platforms. –  Dec 17 '12 at 21:29
  • @delnan, ok, I took your previous statement to imply that you didn't know why CPython reimports the modules. Agreed that relying on fork() being used would be a mistake. – Winston Ewert Dec 17 '12 at 21:31
3

Firstly, your question seems confused about the relationship between a GIL and co-operative multithreading. Co-operative multithreading is when the current thread continues to execute until it gives up execution. The greenlet library for Python is based on this model. It simplifies coding in many cases because you don't have to worry about context switches except at specific points.

A global interpreter lock is a lock which prevents more than one thread from executing code inside the virtual machine at once. Execution will still jump from thread to thread without the thread requesting it. The times at which switches can happen are limited (exactly how depends on the language implementation), but you don't really get the simplification that co-operative multitasking gives you.
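For example, CPython has a GIL but preemptive switching, so a read-modify-write on shared state still needs an explicit lock; a switch can land between the read and the write:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        # Even under a GIL, the interpreter may switch threads between
        # the read and the write of `counter`, so the read-modify-write
        # must be guarded explicitly.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 40_000; without the lock, updates could be lost
```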

Your question actually appears to be asking whether you can get away with co-operative multitasking plus multi-process options. Strictly speaking, of course, you can still do whatever you want without pre-emptive multithreading; the question is whether it'll be easier or more efficient with it.

I use co-operative multithreading and process parallelization whenever I can. Most of the time I find that works beautifully and is a simpler approach than would be required if I were to try and use threads. But I think there are some cases where this falls down.

Let's consider some examples:

1) Worker Thread and UI Thread

It's not uncommon to have a long-running task executing in an application while a progress bar is displayed. To keep things working, we need to process UI events as well as continue running the task. Normally, we'd have the task execute in a separate thread. But if the threads are co-operative, this won't work, because the task won't normally have any reason to delay itself.

So what can we do?

  1. In some cases, there will be natural pause points in the long-running task: file I/O, database calls, socket reads. All of these naturally block, and if your language automatically waits at these points, many long-running tasks may yield naturally.
  2. The task could be moved into another process. But for some tasks, this will take a lot of effort. I may have to ship a lot of data to the subprocess and then let it process and ship the data back.
  3. You could introduce explicit thread yield calls. The disadvantage here is that you are doing something manually that other languages do automatically.
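Option 3 might look like the following sketch, using Python's asyncio with await asyncio.sleep(0) standing in for an explicit thread.wait() call:

```python
import asyncio

events = []

async def long_task(chunks):
    for i in range(chunks):
        events.append(f"work {i}")   # one slice of the long job
        await asyncio.sleep(0)       # explicit yield point (option 3)

async def ui_loop(ticks):
    for i in range(ticks):
        events.append(f"ui {i}")     # e.g. repaint the progress bar
        await asyncio.sleep(0)

async def main():
    # run the worker and the "UI" concurrently on a single thread
    await asyncio.gather(long_task(3), ui_loop(3))

asyncio.run(main())
# the two loops interleave instead of the task starving the UI
```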

2) Serving many requests

A system such as a database or a web server may need to serve requests coming from many different external systems. In doing so it will have to navigate in-memory data structures and multiple requests may require the same data structures. Typically, we might implement this using multiple threads and using locks on the data structure to make sure nobody changes it while it is being read.

As long as we only have one core operating, co-operative multithreading actually works great. But when you've got multiple cores, you can't take advantage of them that way. You could introduce multiple processes, but the trouble there is that you can't really share in-memory data structures all that well. I'm sure you can work around it with IPC techniques, but I think it'll always be awkward compared to the locks model.
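A minimal Python sketch of the locks model: one shared dict, one lock, many request-handling threads (a real server would use finer-grained locking):

```python
import threading

store = {}                    # shared in-memory structure
store_lock = threading.Lock()

def handle_request(key, value):
    # Lock only around the mutation, so every thread always sees
    # a consistent structure.
    with store_lock:
        store[key] = store.get(key, 0) + value

requests = [("a", 1), ("b", 2), ("a", 3)] * 100
threads = [threading.Thread(target=handle_request, args=r) for r in requests]
for t in threads:
    t.start()
for t in threads:
    t.join()
# store == {"a": 400, "b": 200}
```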

Winston Ewert
  • 24,862
  • I'll look into the difference between a GIL and co-operative multithreading and probably edit the question. +1 for pointing out the obvious "background threads must wait." But why can't co-operative multithreading use locks? Do you think it would make the code more difficult to write? – Jonathan Graef Dec 17 '12 at 21:47
  • 1
    @JonathanGraef, of course co-operative mulithreading can use locks. My point was that separate processes cannot share data structures which would normally be controlled by locks in a multithreaded app. – Winston Ewert Dec 17 '12 at 21:52
  • Nothing is awkward with a good library. :) I don't know much about server-side software though, so I can't comment on the performance issues. This language is primarily intended for personal computer apps and services. – Jonathan Graef Dec 17 '12 at 22:16
  • @JonathanGraef, but a good library depends on having the language features to make it easy to use. If your language isn't flexible enough, a good library can't cover the holes. I think you probably can get away with just having co-operative multithreading. I use true threads rarely, and usually just to work around something that doesn't support the co-operative multithreading. – Winston Ewert Dec 18 '12 at 16:24
2

I've designed a number of languages and come to realize there are different needs for threads or parallelism.

One is raw performance. If a program is CPU-bound and there is a way to parallelize it across multiple cores then clearly one would want that.

The other, and far more common need, in my experience, is to simplify the representation of sequences of activities where those sequences are, in some sense, independent of each other.

The obvious case is listening on multiple input streams and responding to requests on those streams.

A case I work with a lot has to do with pharmacologic modeling, where a subject may get one medication on one schedule, another on another, and have observations on yet another schedule, while simultaneously dealing with adverse events and alarms. Treating this as a purely event-driven process is possible, but the coding is very clumsy.

In these cases, what is needed is a way to clearly express what one is trying to say, and it has little to do with being CPU-bound and needing performance. For these needs, non-preemptive parallelism works better, and is much less prone to race conditions.
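A sketch of that idea in Python: each schedule is written as an independent, straightforwardly sequential generator, and a merge step produces the combined timeline (drug names, times, and intervals are invented for illustration):

```python
import heapq

def dosing(name, start, interval, count):
    # each schedule reads as a plain sequential program
    for i in range(count):
        yield (start + i * interval, f"{name} dose")

def observations(times):
    for t in times:
        yield (t, "observation")

# merge the independent schedules into one time-ordered event stream
timeline = list(heapq.merge(
    dosing("drug A", 0, 8, 3),    # hours 0, 8, 16
    dosing("drug B", 4, 12, 2),   # hours 4, 16
    observations([2, 10]),
))
# timeline is sorted by time, yet no schedule knows about the others
```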

Mike Dunlavey
  • 12,835
1

Here's a quick answer - one thread for the UI and one for asynchronous data processing work. So yes, there's a need for true multi-threading. Massive parallel computations are another obvious answer that you alluded to.

From a broader point of view, why wouldn't you offer the capability? Many, many devices have multiple cores available to handle processes. Why would you arbitrarily limit the ability of any program written in the language you're creating?

  • The first use case is concurrency (which he does offer), not parallelism. Python programmers do that too, despite the GIL. –  Dec 17 '12 at 20:43
  • The language does offer the ability: It is super easy to spawn multiple processes to do background work. And if you really must have your background threads in the same process as your UI, why can't they just call thread.wait() every few milliseconds to allow the UI thread to run? – Jonathan Graef Dec 17 '12 at 20:47
  • @JonathanGraef - why impose that restriction upon the developer? IMO, the goal of the language is to make the resources of the system easier to use. Looping and throwing a wait() call every now and then is unnecessary work for the developer. –  Dec 17 '12 at 20:50
  • Synchronization is more work for the developer. I claim throwing in thread.wait() calls is easier. Do you disagree? (and why) – Jonathan Graef Dec 17 '12 at 20:58
  • 2
    @JonathanGraef Note that one still needs synchronization when any function one calls in a critical section may wait. For example, global_x = foo(global_x) is atomic iff foo is atomic. –  Dec 17 '12 at 21:03
  • Yes, you make an excellent point. It would be considered bad practice to put arbitrary wait() calls in, say, library functions. If a task is expected to take some time, it should create its own thread and call a callback when completed, rather than blocking. – Jonathan Graef Dec 17 '12 at 21:08
  • That's not the issue. The issue goes deeper: this approach (not unlike locking) simply doesn't compose, i.e., two functions which are safe in their own right may not be safe when put together (for example, when f can and should wait according to its intended use, the combination wants to chain f and g atomically). Prohibiting yielding everywhere doesn't help, it just makes cooperation harder (and puts all the burden on the programmer gluing stuff together). Making APIs async is an "easy" way out but now you need great tools for working with them, to maintain the simplicity. –  Dec 17 '12 at 21:12
  • I'm not sure what you're saying. Most functions take pretty much no time to execute, so they don't need to wait. But there are two potential gotchas: If the program is run on a slow computer, it will be slow (suprise). Also, if a process is expected to take a great deal of time, it should not block, but instead spawn its own thread which can wait to its heart's content. As long as developers are aware of these "best practices," there shouldn't be any problem. – Jonathan Graef Dec 17 '12 at 21:26
  • (1) That's a pretty freaking fragile model (on par with manual locking, I'd say) if it is not immediately visible which calls may wait, all the way up. If it is, congrats, all your code just got uglier. (2) As I said, async APIs solve this issue but raise others. Even when done very well, they complicate all code involved. (3) Frequently giving up control is required for responsiveness, even if each individual function runs for very short time frames. I imagine it would be hard to add enough waits in the right places (I certainly wouldn't know where and how to do it in my code). –  Dec 17 '12 at 21:34
  • You make a good point that having waits in the wrong places can cause synchronization issues. You're also correct in that not putting waits in fast functions can cause a little bit of UI lag, but I know from using single-threaded languages that the lag isn't really noticeable. A notable example of a language that does it this way is ActionScript (a.k.a Flash), which doesn't support any form of multithreading. It is pretty responsive, but not perfect like native UI components are. – Jonathan Graef Dec 17 '12 at 23:27
  • @JonathanGraef - "but I know from using single-threaded languages that the lag isn't really noticeable" ... for the applications that you've written. That's not a dig, it's just a statement that it's held true for what you've done. But you're asking if there are other applications outside the realm of what you've done that could use true multithreading. You've received several persuasive answers stating that "yes, there are." –  Dec 18 '12 at 12:30
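The atomicity hazard raised in these comments can be simulated with a toy co-operative scheduler in Python: both updaters read the shared variable, hit a wait point inside foo, and then write, so one increment is lost. Generators and yield stand in for the proposed language's threads and thread.wait():

```python
from collections import deque

global_x = 0

def foo_steps(x):
    # foo waits partway through, so global_x = foo(global_x)
    # is no longer atomic
    yield                   # the wait point inside foo
    yield x + 1             # the computed result

def updater():
    global global_x
    snapshot = global_x     # read the shared variable
    steps = foo_steps(snapshot)
    next(steps)             # foo reaches its wait point...
    yield                   # ...and control passes to the other thread
    global_x = next(steps)  # write -- may clobber a newer value

ready = deque([updater(), updater()])
while ready:
    t = ready.popleft()
    try:
        next(t)
        ready.append(t)
    except StopIteration:
        pass
# both updaters read 0, so global_x ends up 1, not 2: a lost update
```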
1

In a world where CPUs can have multiple "cores" and each "core" can work on a separate thread, having multiple independent threads can be a useful feature.

In a world where "events" such as data arriving from the network need to be processed "now" and not when another thread says "I'm done for now" then it may be useful to have multiple independent threads.

Is it really "parallel" processing if only one task is being worked on at a time?

The better question is this: why, why are you doing this?

If it's a domain specific language then you shouldn't care what we think, you know the domain better.

If it's a general purpose language, what makes it distinct enough to be recognized among the many already existing (besides adding this limitation)?

Andrei
  • 131
  • RE reasons: Possible reasons include simplicity of implementation, somewhat better performance of single-threaded code, some weak atomicity guarantees (the last one is actually a downside depending on your POV, but some like it). –  Dec 17 '12 at 20:50
  • @delnan I don't think having to pepper the code with wait is "simple". Why would single threaded code work better, if there is only one thread then isn't this discussion about threading irrelevant? – Andrei Dec 17 '12 at 20:54
  • As someone noted in a comment on the question, the cooperative multitasking thing is independent of the GIL. CPython, for example, has preemptive multitasking but also a GIL. As for single-threaded code: I didn't say single-threaded code is "better". I said having no synchronization at all in the implementation (except a single lock that's only touched a few times per second) makes single-threaded code faster because lots of unnecessary (for single threaded code) locks go away. The alternative is compiling a separate interpreter without locks and threads, but that creates other headaches. –  Dec 17 '12 at 20:58
1

After analyzing other people's answers, I believe I now understand why simultaneous multithreading is important.

There are two main reasons why a general-purpose language should have real, preemptive multithreading support:

  1. Sometimes developers want to use real threads in their programs. 'Nuff said.

  2. Real threads are necessary to allow memory to be shared across threads running on multiple processor cores. This feature is essential for performance optimization in some cases, and in other cases it just makes the code easier to write.

However, in cases where memory-sharing is less important, cooperative multithreading programs are just as easy to write and far easier to debug, as long as the developer understands how to use the system.

So the answer is: Yes, true multithreading really is necessary in some cases, but only for certain performance optimizations and accessibility to developers who prefer it. Otherwise, cooperative ("fake") multithreading is the best solution.

Thanks for all the input guys, I might not have figured this out without you. I'm going to include both threading schemes in my language. We'll see how well that goes.

1

If there is no shared mutable state then it makes life a lot simpler because you don't need to synchronise between processes. At that point you have an actor model as in Erlang.
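A toy version of the actor model in Python, to show the shape: each actor owns its state and is driven only by messages in its mailbox (Erlang's processes are of course far lighter and richer than one OS thread per actor):

```python
import queue
import threading

class Actor(threading.Thread):
    """Each actor owns its state; others interact only via messages."""
    def __init__(self):
        super().__init__()
        self.mailbox = queue.Queue()
        self.total = 0            # private state, never shared directly

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:       # poison pill: shut down
                break
            self.total += msg

adder = Actor()
adder.start()
for n in (1, 2, 3):
    adder.mailbox.put(n)          # message passing, no locks needed
adder.mailbox.put(None)
adder.join()
# adder.total == 6
```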

The other problem you have is that if you are counting on the programmer to call thread.wait() or the like then you have the possibility of a misbehaving program bringing down the whole system.

I would suggest you take a good look at both Erlang and Clojure, as they both have some of the aspects of what you are looking for.

Zachary K
  • 10,413
0

These days, if you want more speed for calculations you need to go concurrent, especially when you have more cores to use.

To give an example: most GUI frameworks have a single thread that manipulates the screen and handles user events, but the handling of these events must finish quickly or the entire GUI will hang.

If you want to do any I/O, you could use async I/O and callbacks.
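For example, with Python's asyncio (fake_io is an invented stand-in for a real socket or file read; the callback fires when each operation completes):

```python
import asyncio

results = []

def on_done(task):
    # the callback runs when the I/O-like operation completes
    results.append(task.result())

async def fake_io(value, delay):
    await asyncio.sleep(delay)   # stands in for a real read
    return value

async def main():
    slow = asyncio.ensure_future(fake_io("slow", 0.02))
    fast = asyncio.ensure_future(fake_io("fast", 0.01))
    slow.add_done_callback(on_done)
    fast.add_done_callback(on_done)
    await asyncio.gather(slow, fast)

asyncio.run(main())
# callbacks fire in completion order, not start order
```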

Your language seems to rely on cooperative multithreading, which requires all the code in the application to cooperate (or at least not block indefinitely). Preemptive multithreading is comparatively easier to code for (though with greater pitfalls).

ratchet freak
  • 25,876
  • Concurrency != parallelism. And as the question explains, threads aren't the only option for parallelism. –  Dec 17 '12 at 20:53