Know your bits from your bytes
Computers use binary arithmetic because it is easy to tell the difference between 0 and 1. In a real-life system, 0 might be represented by a low voltage and 1 by a high voltage. Whatever noise or interference gets into the signal is unlikely to bridge the divide between the two states. If computers used four states rather than two, telling them apart would be less certain. There are some digital systems that are not binary, but they are very much the exception.
To explain the terminology, then: a digit that can be either 0 or 1 is a binary digit, or bit for short, abbreviated b. The bit is the smallest unit of digital data and is indivisible - there is nothing smaller.
For historical reasons, it became common to group bits into sets of eight. Someone who probably thought they were being very clever decided it would be a good idea to call a set of eight bits a byte, abbreviated B. And would you believe there is such a thing as half a byte, called a nibble, sometimes spelled nybble? There was one once-famous piece of digital equipment that had a peculiar test routine called a 'nibble swap'. Don't ask.
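If you are curious what a nibble swap might look like, here is a little Python sketch (the function name is just for illustration, not from any real piece of equipment): it exchanges the high and low nibbles of a byte using bit masks and shifts.

```python
def nibble_swap(byte: int) -> int:
    """Swap the high and low nibbles of an 8-bit value."""
    low = byte & 0x0F    # bottom four bits (the low nibble)
    high = byte & 0xF0   # top four bits (the high nibble)
    return (low << 4) | (high >> 4)

print(hex(nibble_swap(0xA5)))  # 0x5a
```

Swapping twice gets you back where you started, which made it a handy self-test.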
By now you probably need to know why programmers' favorite food is pizza. They can hold the pizza with one hand while they hack at the keyboard with the other!
The next step up in data volume is the Kilobyte (KB), which is 1024 bytes. "Hang on!", you say, "I thought 'kilo' meant 1000?". Not in computerish it doesn't. 1024 is what programmers call a nice round number - it is 2 to the power 10, so in binary it really is round. And by the way, when they count in good old fashioned decimal, they don't start at 1, they start at 0.
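Both oddities are easy to see in a couple of lines of Python:

```python
# 1024 is "round" in binary because it is exactly 2 to the power 10.
kilobyte = 2 ** 10
print(kilobyte)    # 1024

# And programmers really do count from 0: the first item is at index 0.
letters = ['a', 'b', 'c']
print(letters[0])  # a
```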
So 1 KB is one Kilobyte. 1 Kb is one Kilobit. It is important to note the difference between B and b - it's a big difference.
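Just how big is easy to check with a little arithmetic (the variable names here are only for illustration):

```python
BITS_PER_BYTE = 8

one_kilobit_in_bits = 1024                   # 1 Kb
one_kilobyte_in_bits = 1024 * BITS_PER_BYTE  # 1 KB = 8192 bits

# A Kilobyte is eight times bigger than a Kilobit.
print(one_kilobyte_in_bits // one_kilobit_in_bits)  # 8
```

This is why network speeds quoted in bits per second always sound more impressive than the same figure in bytes.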
Onwards and upwards: there is more than one definition of the Megabit and Megabyte. Some references use Megabyte to mean 1,000,000 bytes; others use it to mean 1024 Kilobytes = 1,048,576 bytes. To clear up the confusion, the IEC (International Electrotechnical Commission) recommends the unit mebibyte, which always means 1,048,576 bytes - the larger of the two versions.
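You can put a number on the gap between the two versions with a quick sketch:

```python
megabyte_decimal = 10 ** 6  # 1,000,000 bytes - the "kilo means 1000" version
mebibyte = 2 ** 20          # 1,048,576 bytes - the IEC mebibyte

difference = mebibyte - megabyte_decimal
print(difference)                                     # 48576 bytes
print(round(difference / megabyte_decimal * 100, 1))  # 4.9 (per cent)
```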
Do you need to know mebibytes? No - hardly anyone else does. And the difference between the two versions of the Megabyte is under five per cent, so in everyday use it hardly matters.
Beyond that there are gibibytes (you don't need to know), Gigabytes (GB - approx. 1000 Megabytes), Terabytes (TB - approx. 1000 Gigabytes) and Petabytes (PB - approx. 1000 Terabytes). Right now, we don't need anything bigger than that for anything you might want to do in the real world.
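If you ever forget the ladder, here is the whole run of units in one place, using the approximate decimal values from above:

```python
# Approximate decimal sizes of the larger units - each step is ~1000x.
UNITS = {
    'KB': 10 ** 3,   # Kilobyte
    'MB': 10 ** 6,   # Megabyte
    'GB': 10 ** 9,   # Gigabyte
    'TB': 10 ** 12,  # Terabyte
    'PB': 10 ** 15,  # Petabyte
}

for name, size in UNITS.items():
    print(f"1 {name} is roughly {size:,} bytes")
```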
By the way - Megabyte or megabyte? 'Megabyte' is the old standard, 'megabyte' is the new IEC standard. To hell with standards, does it really matter?