People who don't understand Terminal are often afraid to use it for fear that they might mess up a command and crash their computer. Those who know Terminal better know that this usually isn't the case: Terminal will just output an error. But are there actually commands that will crash your computer?
WARNING: you could lose data if you type or copy-paste these commands to try them.
13 Answers
One way to crash a computer is to execute a so-called fork bomb.
You can execute it on a Unix system with:
:(){ :|: & };:
It's a command that will recursively spawn processes until the OS is so busy that it won't respond to any action anymore.
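The one-liner is easier to digest once the function gets a readable name. The sketch below only defines the bomb and prints the process limit that would contain it; the actual invocation stays commented out:

```shell
#!/bin/bash
# Readable equivalent of :(){ :|: & };: - the ":" is just a function name.
bomb() {
    bomb | bomb &   # each call pipes into a second copy and backgrounds it,
}                   # doubling the process count until the OS stops responding
# bomb              # <- the trailing ":" of the one-liner; deliberately NOT run

# One built-in containment: the per-user process limit.
ulimit -u           # prints a number (or "unlimited")
echo "fork bomb defined but not invoked"
```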

- Better: `:(){sudo rm -rf /;:|:&};:`. Not sure if it works properly though, I don't have a proper VM set up at the moment. – bunyaCloven Jul 31 '17 at 08:03
- @bunyaCloven If I understand your command correctly, it's a command to remove all folders without prompting, which is very dangerous if it works. I wish you had written a warning notice for that. – Andrew T. Jul 31 '17 at 08:27
- @AndrewT. People shouldn't just type random commands they find on the Internet all willy-nilly (especially ones in a thread called "can you crash your computer via terminal"). – John Hamilton Jul 31 '17 at 11:16
- The fork bomb will actually do minimal damage on Mac OS X as it has upper bounds for the number of processes. – GDP2 Jul 31 '17 at 15:35
- @bunyaCloven Replace the `;` with an `&` and you get to remove all files and fork bomb at the same time, and see which breaks the system first! – Muzer Aug 01 '17 at 11:16
- @sgr How does that make sense? If it begs people to try it and see if it works, then those people are knowingly trying something that they know could destroy their system if it works. I'm not sure how this ends up tricking someone into doing something they didn't expect. – iheanyi Aug 01 '17 at 15:24
- @SGR So you're suggesting some user will see a question labeled "crash your computer" and decide to just copy-paste things into their terminal? `sudo rm -rf /` literally does nothing nowadays anyway. – Nick T Aug 01 '17 at 15:29
- IIRC, all this does on macOS is print "bash: Resource temporarily unavailable" repeatedly while using a fairly minimal amount of resources. macOS appears to set a hard limit on how quickly processes can spawn, or something like that. It's easy to Ctrl+C, or just close the controlling terminal (killing all attached processes). – nneonneo Aug 01 '17 at 21:45
- @Muzer That's what I intended. I forgot that `;` actually waits for the first command to finish. – bunyaCloven Aug 02 '17 at 12:46
- @GDP2 Won't this "minimal damage" still force you to reboot your Mac? You won't be able to spawn any killer process otherwise. – Ruslan Aug 02 '17 at 12:49
- @Ruslan I've run the fork bomb before, and while it may cause mild problems, it does not force a reboot. Now, if you set your max process limits higher than the defaults, that might force you to reboot, if they're set high enough. The way to do that is `launchctl limit maxproc <soft> <hard>` and `launchctl limit maxfiles <soft> <hard>`. Also, from `man setrlimit`: "When a soft limit is exceeded a process may receive a signal (for example, if the cpu time or file size is exceeded), but it will be allowed to continue execution until it reaches the hard limit (or modifies its resource limit)." – GDP2 Aug 02 '17 at 18:44
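The limits GDP2 describes can be inspected read-only before experimenting; `launchctl limit` is the macOS-specific view, while the shell's own `ulimit` builtin works anywhere. A sketch assuming bash:

```shell
#!/bin/bash
# Soft limit: crossing it may get the process signalled.
# Hard limit: the absolute ceiling; only root may raise it.
soft=$(ulimit -Su)
hard=$(ulimit -Hu)
echo "max user processes: soft=$soft hard=$hard"
# The equivalent launchd knobs on macOS (not run here):
#   launchctl limit maxproc
#   launchctl limit maxfiles
```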
- @GDP2 Maybe that's the case with OS X now, but I can speak from personal experience in saying that its default limits were not sufficient to stop a fork bomb from bringing down a machine about 10 years ago. This personal experience consisted of finding that some student had run a fork bomb on the Computer Science Department's shell server at my university. We had to do a hard reset on the server to stop it. The student in question was trying to answer a question of "which of the following programs would crash a computer". He found the right answer. By experimenting on our server. – reirab Aug 03 '17 at 14:29
- @GDP2 As a side note, the student in question got his own special line in the ssh config file after that incident, making it quite difficult for him to log into the shell server... – reirab Aug 03 '17 at 14:30
- @reirab Lol, yeah, I believe it. Certainly the fork bomb is not a toy, although I believe it is not a severe threat on more modern, properly protected systems. – GDP2 Aug 04 '17 at 04:23
- I use this all the time in front of Linux people, and my machine handles it mostly fine (the side effects seem to disappear after a few minutes), whereas theirs crash instantly. – André Borie Aug 04 '17 at 12:31
Not sure what you mean by 'crash'ing the computer - if you rephrase it as 'render the computer unusable', then yes. Certainly all it takes is a single stray command - just a moment where you're not thinking clearly about what you're doing, similar to when you speak without thinking - and the damage can be immense and almost immediate. The classic example:
$ sudo rm -rf /
If you let that command run for even just one second, it can wipe out enough of your system to render it unbootable, and possibly cause irreversible data loss. Don't do it.

- And just to share why I wanted to clarify the rephrasing: to 'crash' the computer in the traditional sense - to make it lock up - you'd need to give the CPU enough work that it can't respond in a timely fashion to other jobs, like updating the graphics and moving the cursor, for example. I'm sure there's a way to do that from the command line. – Harv Jul 31 '17 at 03:35
- Why invoke `rm` when `halt` will do the job? Or at least point the remove command at files you know don't need to be backed up and cause permanent data loss? – bmike Jul 31 '17 at 04:34
- Out of curiosity: I know `sudo` is super user, and `rm` removes files, but what does `-rf /` do? Which files in particular does this remove? (Also, doesn't `sudo` require an admin password to run? I don't know about you, but if my computer prompts me for my password - not my Touch ID - I look twice to see what exactly is asking for it.) – DonielF Jul 31 '17 at 04:48
- @DonielF `-r` means to recursively delete files in a directory. `-f` means "force", as in don't ask for confirmation, regardless of a given file's permissions. `/` is the root directory of the filesystem, which means it will destroy anything and everything, except maybe some special files that don't behave as typical files. Also, you'll have a pretty hard time finding a brief command that will crash your system without root/admin permissions. – GDP2 Jul 31 '17 at 04:52
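The flags explained in that comment can be rehearsed without risk by aiming the very same options at a `mktemp` sandbox instead of `/` (a sketch; nothing outside the sandbox is touched):

```shell
#!/bin/bash
# -r recurses into directories, -f never prompts; the target makes it safe.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/a/b"
printf 'data\n' > "$sandbox/a/b/file.txt"

rm -rf "$sandbox/a"            # same flags as the infamous command
remaining=$(ls -A "$sandbox")  # empty: the whole subtree vanished, silently
echo "remaining entries: '$remaining'"
rmdir "$sandbox"
```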
- This one is a common one. And `rm` will go in alphabetical order, so `bin` will disappear rather early, making any later command fail. Note also that `sudo rm -rf *` (clear everything in the current directory) will do the same if you've earlier done `cd /`. It's much more common to fail this way, and to empty the contents of a directory you did not mean to empty. – Thibault D. Jul 31 '17 at 06:51
- I tried `rm -rf /` a while back, and `rm` said that if you want to remove root, then use such-and-such flag. No data was lost. It seems there is now a safeguard against blindly running `rm -rf /`. – alexyorke Jul 31 '17 at 11:50
- To my understanding, modern versions of macOS are "rootless", so this can be avoided. – Thorbjørn Ravn Andersen Jul 31 '17 at 12:42
- @alexy13 That's only happening on Linux versions of `rm`, AFAIK. macOS has SIP, which (among other things) prevents you from wiping the OS with a careless `rm`. – nohillside Jul 31 '17 at 15:09
- The `--no-preserve-root` flag has been required since 2006 for this to work as intended. – Encaitar Jul 31 '17 at 15:13
- The command `sudo rm -rf /` was messaged around when I was at uni - made out to be a "new" chat program... :) Lots got caught... Most of us always checked the man page to verify commands before ever putting them on the command line... learn quick! – Solar Mike Jul 31 '17 at 15:19
- @DavidMulder Your comment isn't super clear, but I think I agree with it - that nobody should be saying that running rm won't do irreversible damage. It's like saying that it's okay to point a loaded gun at someone because the safety is on - it's true that (most often) the gun won't fire, but the intent of the rm tool is to destroy whatever it's pointed at; the safety being in the way doesn't make it a good idea to point it at something important. – Harv Jul 31 '17 at 16:24
- @Harv No one is talking about actually using the command, but this answer is simply incorrect. The answer provides a command that will not do anything except show a string. – David Mulder Jul 31 '17 at 16:40
- @DavidMulder Oh I see, you're saying my answer only shows a string. That's not true on all systems. – Harv Jul 31 '17 at 16:46
- @Harv Which macOS system is this not true on? Considering this is the Apple SE. You might well be right, I am not much of an Apple fan. – David Mulder Jul 31 '17 at 16:56
- @DavidMulder Any system before SIP was introduced, which I believe was 10.10 or 10.11. These rm protection features, as far as I'm aware, are very new, and thus it's dangerous for people to think it's safe to just run this willy-nilly. See here: https://support.apple.com/en-us/HT204899 – Harv Jul 31 '17 at 16:57
- Here's a real-world case in which this actually happened - an `rm -rf` really similar to a legitimate one, that went really wrong :/ – mgarciaisaia Jul 31 '17 at 21:06
- @mgarciaisaia And here is another that was well publicised at the time. – Baldrickk Aug 01 '17 at 09:33
- I remember there was a case here on Stack Exchange with a command like `rm -rf /${xyz}` where, by accident, the value of `$xyz` was an empty string - bad luck, because even the backup drive was mounted to the local file system! – Wernfried Domscheit Aug 01 '17 at 16:54
- This is really easy to do by accident with undefined variables in Bash. Something like `PROGDIR=/home/.local/removeme ; rm -rf ${PORGDIR}/*` will cause immense damage very quickly. Famous example of this happening to a well-known software company: https://github.com/valvesoftware/steam-for-linux/issues/3671 – nneonneo Aug 01 '17 at 21:48
- Even with SIP enabled, `rm -rf /` will still wipe `/etc` (and practically everything below `/Users`, of course). The system may not immediately crash, but the next reboot will surely be interesting... – nohillside Aug 02 '17 at 06:41
- @patrix `rm -rf /` is completely harmless on Linux and has been so for many, many years. Additionally, the POSIX specs require that `rm` not be capable of deleting `/`, so all POSIX-compliant systems will support this. So actually, it is the BSD (and by extension macOS) versions of `rm` that this is happening with, not Linux. – terdon Aug 02 '17 at 12:17
- @terdon As AD focuses on macOS etc., a discussion of features the Linux version of `rm` has seems rather odd and may give users the impression that it is safe to play around with `rm -rf /`. Of course it will not delete `/` itself, but everything deletable beneath it (which will do a lot of damage). – nohillside Aug 02 '17 at 12:52
- @patrix I quite agree that it seems odd, but, well, you started it! :P I was responding to your comment, which falsely claimed that `rm -rf /` is dangerous on Linux. Of course this isn't the place to discuss Linux, but that's no reason to leave misinformation lying around. – terdon Aug 02 '17 at 13:20
Suppose you don't know what you're doing and are attempting to back up a hard drive:
dd if=/dev/disk1 of=/dev/disk2
Well, if you mix those up (switch if and of), it will overwrite the fresh data with the old data, no questions asked.
Similar mix-ups can happen with archive utilities - and frankly, with most command-line utilities.
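Because `dd` silently destroys whatever `of=` points at, the argument order is worth rehearsing on ordinary files before ever typing a `/dev/diskN` path (a sketch using temporary files):

```shell
#!/bin/bash
# if= is the source, of= is the destination; swapping them truncates the source.
src=$(mktemp)
dst=$(mktemp)
printf 'precious data\n' > "$src"

dd if="$src" of="$dst" 2>/dev/null   # correct order: read src, write dst
copied=$(cat "$dst")
echo "destination now holds: $copied"
rm -f "$src" "$dst"
```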
If you want an example of a one-character mix-up that will crash your system, take a look at this scenario: you want to move all the files in the current directory to another one:
mv -f ./* /path/to/other/dir
Let's accept that you learned to use `./` to denote the current directory (I did). Well, if you omit the dot, it will start moving all your files - including your system files. You are lucky you didn't sudo this. But if you read somewhere that with `sudo -i` you will never again have to type sudo, you are now logged in as root. And now your system is eating itself in front of your very eyes.
But again, I think stuff like overwriting my precious code files with garbage - because I messed up one character or mixed up the order of parameters - is more trouble.
Let's say I want to check out the assembler code that gcc is generating:
gcc -S program.c > program.s
Suppose I already had a program.s and I use TAB completion. I am in a hurry and forget to TAB twice:
gcc -S program.c > program.c
Now I have the assembler code in my program.c and no C code anymore. Which is at least a real setback for some, but to others it's start-over-from-scratch time.
I think these are the ones that will cause real "harm". I don't really care if my system crashes. I would care about my data being lost.
Unfortunately these are the mistakes that will have to be made until you learn to use the terminal with the proper precautions.
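One concrete precaution against the redirection accident described above: bash and zsh have a `noclobber` option that makes `>` refuse to truncate an existing file, while `>|` remains available for deliberate overwrites. A sketch:

```shell
#!/bin/bash
# With noclobber on, "> existing-file" fails instead of destroying the file.
set -o noclobber
dir=$(mktemp -d)
echo 'int main(void){return 0;}' > "$dir/program.c"   # file is new: allowed

if ! echo 'assembler output' > "$dir/program.c" 2>/dev/null; then
    status="refused to clobber program.c"
fi
echo "$status"
survivor=$(cat "$dir/program.c")   # the original C source is untouched
rm -rf "$dir"
```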

- Your last point is one of many reasons why everyone should use version control. – Darren H Jul 31 '17 at 12:58
- +1. I once had Sencha Cmd bug out and stomp on a file it uses for generating the final app, due to a `+#` added at the end of the command (those keys are near Enter, so they can be added accidentally). It took some time to notice what happened, but a git checkout of that file fixed it immediately. – masterX244 Jul 31 '17 at 13:20
- I once destroyed a program I was working on using `gcc program.c -o program.c`, thanks precisely to tab completion. I learned to use version control religiously after that. – nneonneo Aug 01 '17 at 19:30
- The best answer so far: posting legitimate-looking commands that could be the result of a simple typo and yet can cause major damage. – gaazkam Aug 01 '17 at 21:32
- "Now I have the assembler code in my program.c" Nope. You have nothing. The redirection truncated the file before GCC even opened it. – muru Aug 03 '17 at 05:32
- That's a bit of a contrived example. `gcc -S program.c` writes the asm to `program.s`, not to stdout. (Use `gcc a_function.c -O3 -S -o- | less` if that's what you want.) As @nneonneo says, the plausible scenario is that you want to override the `a.out` default name for the executable and tab-complete `gcc program.c -o program.c`. But when that happens, just go back into your editor and re-save the file, assuming you suspended it or tabbed away instead of exiting. – Peter Cordes Aug 06 '17 at 09:32
- With gcc 5.4, doing `gcc program.c -o program.c` gives `gcc: fatal error: input file ‘program.c’ is the same as output file` followed by `compilation terminated.` – GoodDeeds Aug 06 '17 at 14:33
- Oh man, I am actually really happy they added that user-interface improvement in GCC. It's been a while since my last blunder, but it's nice to see that I'll have a little protection from that next time. – nneonneo Aug 06 '17 at 17:57
Causing a kernel panic is more akin to crashing than the other answers I've seen here thus far:
sudo dtrace -w -n "BEGIN{ panic();}"
(code taken from here and also found in Apple's own documentation)
You might also try:
sudo killall kernel_task
I haven't verified that the second one there actually works (and I don't intend to as I actually have some work open right now).

- Just tried the second one in a 10.12.3 VM, and it just says: `No matching processes were found` – Alexander O'Mara Jul 31 '17 at 06:00
- Also, the first one doesn't seem to work, at least if SIP is enabled: `dtrace: system integrity protection is on, some features will not be available`, `dtrace: description 'BEGIN' matched 1 probe`, `dtrace: could not enable tracing: Permission denied` – Alexander O'Mara Jul 31 '17 at 06:01
- @AlexanderO'Mara Not very surprised by your results on the second command; I figured that Mac OS X wouldn't allow you to just take down the kernel process in such a way. The results for the first command are also to be expected, as `dtrace` was effectively neutered by SIP. – GDP2 Jul 31 '17 at 15:48
- `kernel_task` is not a normal process. It's immortal; it can't be killed except through an error of its own (and that would be called a KP and would bring the entire machine down). `kernel_task`'s PID is nominally 0, but if you supply that to the `kill(pid, sig)` syscall, the man page says: "If `pid` equals 0, then `sig` is sent to every process in the process group of the calling process." So you're simply unable to send `kernel_task` a signal. – Iwillnotexist Idonotexist Jul 31 '17 at 16:40
- @IwillnotexistIdonotexist Yeah, I figured as much would be the case; thanks for the info, though. Good stuff to have in mind. – GDP2 Jul 31 '17 at 16:46
Modern macOS makes it really hard to crash your machine as an unprivileged user (i.e. without using `sudo`), because UNIX systems are meant to handle thousands of users without letting any of them break the whole system. So, thankfully, you'll usually be prompted before you do something that destroys your machine.
Unfortunately, that protection only applies to the system itself. As xkcd illustrates, there's lots of stuff that you care about that isn't protected by System Integrity Protection, root privileges or password prompts:
So, there's tons of stuff you can type in that will just wreck your user account and all your files if you aren't careful. A few examples:
- `rm -rf ${TEMPDIR}/*`. This seems totally reasonable, until you realize that the environment variable is spelt `TMPDIR`. `TEMPDIR` is usually undefined, which makes this `rm -rf /*`. Even without `sudo`, this will happily remove anything you have delete permissions on, which will usually include your entire home folder. If you let this run long enough, it'll nuke any drive connected to your machine too, since you usually have write permissions on those.
- `find ~ -name "TEMP*" -o -print | xargs rm`. `find` will normally locate files matching certain criteria and print them out. Without the `-o` this does what you'd expect and deletes every file starting with `TEMP*` (as long as you don't have spaces in the path). But the `-o` means "or" (not "output", as it does for many other commands!), causing this command to actually delete all your files. Bummer.
- `ln -sf link_name /some/important/file`. I get the syntax for this command wrong occasionally, and it will rather happily overwrite your important file with a useless symbolic link.
- `kill -9 -1` will kill every one of your programs, logging you out rather quickly and possibly causing data loss.
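The `TEMPDIR`/`TMPDIR` trap in the first example has two stock bash countermeasures, `set -u` and the `${var:?}` expansion; here they guard a cleanup routine pointed at a scratch directory (a sketch; the function name is illustrative):

```shell
#!/bin/bash
set -u    # referencing an unset variable is now a fatal error

cleanup() {
    # ${1:?...} aborts with a message when the argument is missing or empty,
    # so "rm -rf $dir/*" can never silently expand to "rm -rf /*".
    local dir="${1:?cleanup: no directory given}"
    rm -rf -- "$dir"/*
}

scratch=$(mktemp -d)
touch "$scratch/junk1" "$scratch/junk2"
cleanup "$scratch"
leftover=$(ls -A "$scratch")   # empty: only the scratch directory was emptied
rmdir "$scratch"
echo "guarded cleanup done"
```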

- FYI (for others reading this): `find` has a `-delete` argument which is much safer than piping to `xargs rm`. – Josh Aug 02 '17 at 16:26
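A quick sandboxed comparison of the two approaches mentioned here (a sketch; the filename containing a space is there on purpose):

```shell
#!/bin/bash
# find -delete removes exactly what matched: no -o surprises, no quoting woes.
dir=$(mktemp -d)
touch "$dir/TEMP_one" "$dir/TEMP_two" "$dir/keep me.txt"

find "$dir" -name 'TEMP*' -delete
survivors=$(ls -A "$dir")
echo "survivors: $survivors"        # only the non-matching file remains

# When a pipeline is needed anyway, NUL-separate to survive spaces in names:
find "$dir" -type f -print0 | xargs -0 rm -f --
rm -rf "$dir"
```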
- Is modern macOS really more crash-proof? Most of these systems are for a single user. Do they really have sane maxprocs/cpu limits? Can you provide a reference? – user2497 Aug 02 '17 at 22:38
- You, of all people, would know well the damage `ln -sf` can do... and how to recover from it :-) – Iwillnotexist Idonotexist Aug 03 '17 at 09:41
- @Josh: Thanks for pointing that out. And, in the general case, one should use `find -print0 | xargs -0` to safely handle strange characters in filenames. – nneonneo Aug 03 '17 at 14:18
- Agreed. More useful xargs advice: use `<whatever> | xargs echo <something>` first, to preview what commands xargs will actually run. xargs is a great example of why the CLI is so powerful: you can operate on many, many items at once without pesky confirmation and hand-holding... just make sure you're telling it to do what you want. – Josh Aug 03 '17 at 17:40
Another one you can do (and that I have done by mistake before) is:
sudo chmod 0 /
This will render your entire file system (which means all commands and programs) inaccessible... except to the root user. This means you would need to log in directly as the root user and restore the file system, BUT you are unable to access the `sudo` command (or any other command, for that matter). You can restore access to commands and files by booting into single-user mode, then mounting and restoring the file system with `chmod 755 /`.
If this is done recursively with `chmod -R 0 /`, it will render the system unusable. The proper fix at that point is to use Disk Utility from the recovery partition to repair disk permissions. You may be better off just restoring a snapshot or backup of your file system if this was run recursively.

- "You can fix it by ... chmod 755 /" - no, you cannot. Many files require permissions different from 755, either for security or to work at all. `chmod 755 /` will leave your system insecure and broken in subtle ways. The only full recovery from `chmod 0 /` is through snapshot restore, backup restore, and/or reinstall. – marcelm Aug 01 '17 at 16:42
- @marcelm Good point. My suggestion was only to restore access to commands, not a permanent fix. I've updated my answer to reflect that. As far as I know, chmod is not recursive unless you use the `-R` flag - so I thought subdirectories' permissions would not be affected? – musicman523 Aug 01 '17 at 17:41
- @marcelm You are right, but the command shown is not recursive, so only `/` itself is affected. – Andrea Lazzarotto Aug 01 '17 at 20:24
- I once ran `sudo chmod -R 700 /` on a new computer, figuring it would be a lot more secure if I did that. Surprisingly, it booted, and ended up with an empty menu bar and a blank desktop. Nothing else worked, but the recovery partition's Disk Utility "Restore Permissions" actually managed to set almost everything right! – nneonneo Aug 01 '17 at 21:53
- @musicman523 @AndreaLazzarotto Ah, good point about not having `-R`, I missed that. Yes, that would certainly change things. For the recursive case, my original comment still holds, though :) – marcelm Aug 02 '17 at 11:05
- @marcelm Disk Utility has a "Fix Permissions" option which should correct this without a full system restore. – Josh Aug 02 '17 at 16:31
Answers that call `sudo` should be considered invalid: these already assume administrative access to the system.
Try `perl -e 'exit if fork;for(;;){fork;}'`. OS X may have some safeguard against this now. If it presents an Apple bubble asking whether you want to terminate the Terminal app and its subprocesses, you're (almost) good.
`while true ; do cat /dev/zero > /dev/null & done` is also very handy, especially if you don't have `perl`.
`for i in 1 2 3 4 ; do cat /dev/zero > /dev/null & done` will just do a funny little CPU load test. Very good for checking whether your heatsink and fan are up to par.
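A time-boxed variant of that load test cleans up after itself, which makes it a safer way to exercise the fans (a sketch; bash's built-in `SECONDS` counter does the timing):

```shell
#!/bin/bash
# Two busy loops, ~2 seconds each, exiting on their own - nothing to kill.
for i in 1 2; do
    (
        end=$((SECONDS + 2))
        while [ "$SECONDS" -lt "$end" ]; do :; done   # pure CPU spin
    ) &
done
wait                     # returns once both workers are done
done_msg="load test finished"
echo "$done_msg"
```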

- This is known as a fork bomb and will likely render the system unusable (which could be considered a "crash"), but it will not likely cause any permanent damage. But it's nasty! – Josh Aug 02 '17 at 16:35
- @Josh "but will not likely cause any permanent damage" - except to any currently open unsaved work. – reirab Aug 03 '17 at 14:52
- @reirab Josh added the catch-all 'likely' to his statement. But macOS is mostly for editing photos and video now. Don't the Adobe programs have automatic auto-save? – user2497 Aug 03 '17 at 16:04
- Also, unsaved work is always at risk until it's saved. If your computer is rendered unusable, then you can't save anything you have open :) – Josh Aug 03 '17 at 17:37
- @Josh macOS is so easy to save stuff in. It's always Cmd-S. You shouldn't have written 'likely'. – user2497 Aug 03 '17 at 17:40
sudo kill -9 -1
I accidentally performed a `kill -9 -1` in a Perl script running as root.
That was as fast as pulling the power cord. On reboot, the server did a filesystem check and continued running properly.
I never tried that `sudo kill -9 -1` command on the command line. It might not work, because the process ID "-1" means "kill all processes the calling process has permission to send signals to".
Not sure if, with sudo, that also means init and all the kernel stuff...
But if you are root, `kill -9 -1` will definitely make an immediate stop - just like pulling the power cord.
By the way - nothing will appear in the logfiles, because that command is the fastest killer in the west!
Actually, to recover, I went to our sysadmins and told them what I had done. They did a hard reboot, because there was no way to log in to that server (RHEL6).
A `kill -9 -1` as root kills every process that runs as root - that is, e.g., sshd. That logged me out immediately and prevented anyone from logging in again. Any process started by init - including init itself - was killed, unless it had changed UID or GID. Even logging in through the serial console wasn't possible any more. `ps -eaf | grep root` shows some fancy processes which, if they reacted to a SIGKILL in the default way, would pretty much stop even basic writing to the HD.
I will not try this now on my laptop :-) I am not curious enough to find out whether a `kill -9 165` ([ext4-rsv-conver]) would really stop writing to the HD.

- You can't "kill" the kernel, and this shouldn't cause a filesystem check in and of itself. How did you recover from the situation? Did you do a hard reboot? Because that's probably what caused the filesystem check :) – Josh Aug 02 '17 at 16:21
- Your edited answer makes sense. You can't actually kill `init` normally, but you can kill all gettys and SSH sessions and render the machine unusable. A Magic SysRq should have allowed for a clean reboot, but it's often easier to just power-cycle and rely on the FS journal :) – Josh Aug 02 '17 at 19:12
Sure - make sure you have a backup and save any files you care about, then type `halt`.
Assuming you then use `sudo` to run it as root, the Mac will crash.
The biggest risk from the command line is data loss. The macOS graphical interface has been designed over decades not to surprise people and shred their data, settings, or apps. It also exists to remove the steep learning curve involved in being safe and mastering shell scripting.
You lose those protections in the shell, which is why I caution people starting out with the Terminal app or ssh. If you have a backup you know works, and have the time and confidence/skill to perform a restore, then you should dive in, learn, and even break things.

- You said "... and even break things", which is a good use case for doing risky stuff in a virtual machine. :) – user3439894 Jul 31 '17 at 21:44
- How will this crash? It just shuts the system down immediately. It even flushes kernel buffers, so there's no (saved) data loss. https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man8/reboot.8.html – Josh Aug 02 '17 at 16:47
Yes, you can completely destroy your system. Accidentally doing something with `sudo` privileges is one example that has been posted, whether it's forgetting a few characters that instruct the terminal to do something completely different than you intended: `rm`ing `/` instead of `/tmp/*` is only a five-character difference. Putting a space in the wrong place could do something completely different as well. Other times, seemingly well-meaning instructions may have malicious code obfuscated into them. Some people on the internet are very good at obfuscating code.
There are also commands that, using HTML, can be made font-size zero, so something completely innocuous-looking, when copied to the clipboard, could in fact be installing someone's git repo as a trusted source and downloading malware.
And there are commands you can run that open you to exploits, or that could be perfectly well intended but remove important files or programs or corrupt your disk. In fact, using tools incorrectly could do something as basic as accidentally writing over your boot sector, or the head of your disk, or cause lots of other issues.
An example of something less destructive that hasn't been posted is opening binary files in `vi`. If you've ever tried it, you'll know that it can mess up your terminal to the point that it's unusable until it is `reset`.
Alternatively, there are commands that will bog down your machine, like:
yes >> /dev/null & yes >> /dev/null & yes >> /dev/null & yes >> /dev/null &
You can try that one, it's not going to do damage, but it will bog down your processor, and you'll have to kill each process you've spawned.
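Killing what you spawned is half the lesson; this bounded version starts one `yes`, lets it spin briefly, and kills it by PID (a sketch):

```shell
#!/bin/bash
# $! holds the PID of the most recent background job.
yes > /dev/null &
hog=$!
sleep 1
kill "$hog"                # plain SIGTERM suffices; kill -9 is the blunt fallback
wait "$hog" 2>/dev/null    # reap the job so no zombie lingers
echo "spawned and stopped yes (pid $hog)"
```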
That being said, in computing it's generally taken that you can't make an omelette without breaking a few eggs. You should be cautious at the terminal, but the only way that one can become better at using the OS is by learning and practicing.

- Your first example is hardly harmful. Vim is actually quite sane when editing binary files, and in the worst case you can just close the window. The second example with `yes` is annoying and will use up a fair bit of user CPU, but the system will remain responsive and you can easily kill the parent terminal window. – nneonneo Aug 02 '17 at 03:06
- I disagree with "mess up your terminal to the point that it's unusable until you restart" - try `reset`; that should clear up a terminal which has had binary output printed to it. Or just spawn a new TTY. – Josh Aug 02 '17 at 16:27
- @Josh I didn't know how to recover. I meant to use softer language but forgot to go back and edit that. – jfa Aug 02 '17 at 19:37
- Cool, no worries @JFA. It actually took me many years to learn the `reset` trick! For more info: https://unix.stackexchange.com/questions/79684 – Josh Aug 02 '17 at 19:41
- @Josh Thank you for that, it was a big help. It has indeed been many years for me :P – jfa Aug 02 '17 at 19:43
- @Josh Then 'stty sane^M' and 'tput reset' should also be exciting for you. – user2497 Aug 02 '17 at 22:43
- @nneonneo You're right, I left the super-malicious answers until the end. Other answers have better examples, so I tried to include examples that weren't in other answers. These two aren't as harmful, but they can be encountered on a daily basis. – jfa Aug 07 '17 at 14:34
I am only a bash beginner, but you could use something like this:
while true; do COMMAND; done
Most people would try Ctrl+C to stop the command, not the surrounding loop (which you can suspend with Ctrl+Z and must then kill).
If the command in the `while true` loop is a resource-intensive operation (such as raising a large number to its own power), it could tie up your system resources and bog down your processor. However, modern operating systems are usually protected against such catastrophes.
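Why Ctrl+C sometimes seems ineffective: the signal interrupts the command of the moment, and the loop immediately starts the next iteration. Signalling the process that runs the loop stops everything; a sketch, with the loop placed in a child shell so nothing needs interrupting by hand:

```shell
#!/bin/bash
# The infinite loop lives in a child bash; killing that shell ends the loop.
bash -c 'while true; do sleep 0.1; done' &
loop=$!
sleep 0.5
kill "$loop"               # SIGTERM to the looping shell
wait "$loop" 2>/dev/null
echo "loop stopped (pid $loop)"
```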

- It just runs really fast; it won't crash anything. You just have to fork some intensive calculations so the kernel's maxproc limit doesn't make you sad. Try `while true; do cat /dev/zero > /dev/null & done` – user2497 Aug 01 '17 at 14:06
- Thanks. I would have expected handling large numbers to make the computer go slow; it sometimes happened with very simple Java/Python programs I used for machine learning. – Ando Jurai Aug 01 '17 at 14:43
- The cat-zero-to-null bit is a big-number operation, in I/O at least. I use one of these per CPU core to do thermal tests. – user2497 Aug 01 '17 at 14:45
- And `^C` will kill the while loop too, but it just repeats too fast for the interrupt to be caught. Holding down `^C` may break out of the loop. Closing the terminal also will :) – Josh Aug 02 '17 at 16:37
- @Josh It is easier to catch INT if there's a tiny pause, like `sleep 0.1`, after the cpu-intensive task. – user2497 Aug 02 '17 at 22:45
It's a little ambiguous what you mean by "crash" your computer... and there's no definitive correct answer to that, although there are some useful examples in other answers. Since your question is more ambiguous and general, I'd like to focus on the nature of the question and give a more general answer.
People who don't understand Terminal are often afraid to use it for fear that they might mess up their command and crash their computer
I think the command line is a double-edged sword, and often a very sharp one. Its greatest strength is also its biggest weakness for new users: CLI programs do what you say, without asking if it's really what you meant. They often don't ask for confirmation, they don't provide hand-holding or interactive help, and their options are short, often terse, sometimes confusing text-based strings. Note that they are generally very well documented - one just has to read the manual page (which is almost always `man <command name>`) and take the time to understand what the command they are going to run will do.
This mode of operation is powerful -- it means that seasoned CLI users can craft long command "pipelines" which do complex tasks with single commands. This is because the task won't ask "Are you sure?" every step of the way, it does what it's told. But for a user unfamiliar with this mode, and used to a GUI where online help is a click away, it's unfamiliar and scary.
But are there actually commands that will crash your computer?
Can you "crash" your computer using the CLI? Maybe. You can certainly cause data loss if you use a destructive command incorrectly. For example, many of the answers here mention `rm`, a command which deletes files. Obviously, you can cause data loss with that command; it's what the command was designed to do.
As other answers have pointed out, you can use the command line to render your machine virtually unusable for a period of time: you can shut down without confirmation, cause a process to use 100% of your available resources without confirmation, kill all your programs or destroy your filesystem. If you really wanted to, you could use the CLI to craft a kernel extension which causes the kernel to panic (which is the closest to a "crash" I can think of).
The command line (accessed via the Terminal) is a powerful tool. Often it's faster to solve a problem using Terminal than the GUI. Some operations are only available using Terminal commands. However, the key to the CLI is understanding. Don't execute random commands you see online. Read the man pages and understand what commands do. If you're unsure, ask someone or learn more about a command before running it.

You surely can still cause a system crash using commands entered in Terminal.
It's getting harder over the years, probably due to all kinds of limits and protective measures, but as the Murphy's-law-like saying goes: "Nothing is foolproof to a sufficiently capable fool."
"Fork bombs" and all that `rm -rf` script-kiddie stuff are anciently known things on UNIX. With Mac OS X you can have more fun using its GUI subsystem parts (`WindowServer`, to name one) or something like the OpenBSD firewall, aka `PF`, which Apple's engineers brought in but never managed to update past its 2008 state. `PF` works in the kernel, so when it catches a quirk, it's time for Apple to tell you "you restarted your computer because of a panic" or something like that.
The worst part of this is that you can never have any idea of where and why it panicked - because Apple doesn't provide any meaningful stack traces; you only get hex numbers of the stack frames' return addresses.

- Good answer, and excellent points. I would like to add to your list of fine ways to get OS X to panic on the dance floor my personal favorite, described without explicit commands to avoid script-kiddie stupidities: I unload a kernel extension relevant to NFC. Works every time, instantly. One could easily weaponize this into a DoS by scheduling it at an interval of, say, 5 minutes - the machine will boot and then swan-dive. This necessitates a reinstall of the OS, given that most admins and even techs will miss this... – Francis from ResponseBase Sep 29 '18 at 10:14
- `sudo pkill WindowServer` or `sudo pkill kernel_task`? I, er, don't feel like testing them right now, but either would probably do the trick, if the system doesn't wag its finger at you. – John Smith Dec 17 '23 at 05:18