How do 1s & 0s make a picture, and how does code make a picture?

I'm confused about how 1s and 0s make up a picture, like on a TV or computer screen, and how writing code makes a visual image. I'm a beginner when it comes to computer science, but I'm interested in simulation and holographic theory.

Update:

Thanks, Alan, but I still don't understand. Where should I start in order to understand how pixels relate to the formation of color and picture?

13 Answers

  • Tasm
    Lv 6
    2 weeks ago

    One pixel is not just 1 or 0. It's more like 0000, and that would be a black pixel; then 0001 would be a slightly lighter color, and so on.
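
    To make that concrete, here's a tiny Python sketch (the 4-bit greyscale values are just an illustration, not any particular screen's format):

        # Hypothetical 4-bit greyscale: 0000 (0) is black, 1111 (15) is brightest.
        for value in range(16):
            print(f"{value:04b} -> brightness {value} out of 15")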

  • 4 weeks ago

    OK, OK, they don't.

  • garry
    Lv 4
    4 weeks ago

    Your eyes make the picture; it's just that your brain assimilates the actual drawing into something it can recognise. After all, our eyes can be easily tricked. These 1s and 0s only cause a shadow to appear.

  • Embery
    Lv 4
    4 weeks ago

    This isn't for everybody. There was a guy in my courses who couldn't get past the fact that you don't need to know how functions or pointers or try/catch work internally; you just need to know how to use them. You sound like him.

  • 4 weeks ago

    Look at a picture. Imagine the image you see is made up of very small dots. Each picture is composed of millions of dots put together. If enough black dots are put together, they will form a figure; in that case, you can create a black image on a white background.

    You can command a computer to send a black dot to a particular position to create a spot. You can also command the computer to send many black dots to many positions to create an image.

    You can also tell the computer that 1 can mean it will send a black dot while 0 means it sends no signal. This will ultimately create a picture.
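
    As a rough sketch of that idea in Python (the tiny 4 x 4 bitmap below is made up just for illustration):

        # A tiny "image" as rows of 1s and 0s: 1 = black dot, 0 = no dot.
        bitmap = [
            "0110",
            "1001",
            "1001",
            "0110",
        ]

        for row in bitmap:
            # Print a filled character for each 1 and a dot for each 0.
            print("".join("#" if bit == "1" else "." for bit in row))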

  • Anonymous
    4 weeks ago

    1 is on, 0 is off.

  • Anonymous
    4 weeks ago

    You've heard of "RGB"? Red, green, and blue, for the primary colors of light. Each pixel is represented by a set of numbers (usually in binary-friendly powers of two) describing the brightness of each color: 255,255,255 would say "full red", "full green", "full blue", which combined turns out to be "white". 255,0,0 would be "red", 0,255,0 would be "green", etc., and 0,0,0 would be "black". At its least complicated (and boy does it get complicated) you would have three bytes representing the first pixel, three bytes representing the second pixel, and so on down the row. 
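
    Here's a minimal Python sketch of that layout (the pixel values are arbitrary, and real image formats add headers and other details on top of this):

        # Three bytes per pixel: (red, green, blue), each 0-255.
        white = (255, 255, 255)
        red = (255, 0, 0)
        black = (0, 0, 0)

        # One row of four pixels, stored as a flat run of bytes.
        row = bytes(white + red + red + black)
        print(list(row))  # [255, 255, 255, 255, 0, 0, 255, 0, 0, 0, 0, 0]
        print(len(row))   # 12 bytes = 4 pixels x 3 bytes each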

  • 4 weeks ago

    On a computer screen or TV, the image is actually made up of a series of tiny dots of light.  If you look at your computer's resolution, for example, it will be a set of numbers like 1920 x 1080.  That means the computer is displaying 1920 tiny dots of light across your screen.  Each dot is called a pixel (for "picture element").  There are 1080 rows of 1920 pixels that make up the screen.  (That is over 2 million pixels making up the image on your screen.)  The "0" and "1" in the code either turn on or turn off one of those pixels.  The pattern of pixels that are on or off is what makes up the image on your screen.

    It gets a little more complicated than that, in that each of those pixels can also be a different color (red, blue or green besides white) and can be set at different degrees of intensity/brightness.  So there will be a set of 0s and 1s for white, a set for red, a set for blue, a set for green, and a set for how bright each pixel is.  All of those together form the image on your screen.  But it is all stored as either a '0' (the light of that color is off) or a '1' (the light of that color is on).

    Now there are different picture formats that can store this data more efficiently than endless lines of 0s and 1s.  For example, if you have a picture of the sky where there are 200 blue dots in a row, rather than store 200 "1"s in a row, it will store "200 1".  The computer then knows to put 200 blue pixels in a row.  There are formats such as jpg or gif that use "tricks" like that to reduce the size of the file that holds an image.  But there is also some loss of clarity when you do.  Other formats like bmp store every single pixel separately.  It would have 200 "1"s in a row.  But the size of the file would be 10 times that of a jpg or a gif.
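
    Here's a rough Python sketch of that run-length idea (jpg and gif actually use more sophisticated tricks; this only shows the basic principle):

        # Minimal run-length encoding: collapse repeats into (count, value) pairs.
        def run_length_encode(pixels):
            runs = []
            count = 1
            for prev, current in zip(pixels, pixels[1:]):
                if current == prev:
                    count += 1
                else:
                    runs.append((count, prev))
                    count = 1
            runs.append((count, pixels[-1]))
            return runs

        # 200 "blue" pixels in a row collapse to a single (200, 1) entry.
        sky_row = [1] * 200 + [0] * 3
        print(run_length_encode(sky_row))  # [(200, 1), (3, 0)]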

    Do a Google search for "ascii art".  You will see hundreds of pictures that are made up of typed lines of dots and other characters.  If you want to store one of those images, you would just store all the characters that make up the image: the "0"s and "1"s that create the picture.  To see the image, you load all those characters back into your computer and display them.

  • 4 weeks ago

    The vast majority of computers convert binary digital signals into pictures by mapping each byte of information onto a position, intensity, hue and color on a viewing screen.  For example, assume we have the byte 00001001, made up of eight bits (the zeros and ones).  That could represent a position for a pixel to light up on your computer screen.  Then maybe 00001010 is another byte that lights up the pixel right next to the first one on your screen.

    There are millions of points (pixels) on your screen, so there are millions of bytes like the two I listed here.  And each byte is coded to make a pixel go dark, light up, show red, etc.  Put those megabytes (millions of bytes) together and they'll show you a picture (an image) on your screen.
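
    As a rough illustration in Python (the 8 x 8 "screen" and the brightness values here are invented just to show the idea of bytes mapping to positions):

        # A hypothetical 8x8 greyscale "screen" stored as one flat run of bytes.
        WIDTH, HEIGHT = 8, 8
        framebuffer = bytearray(WIDTH * HEIGHT)  # all zeros = every pixel dark

        def light_pixel(x, y, brightness=255):
            # The byte for pixel (x, y) lives at offset y * WIDTH + x.
            framebuffer[y * WIDTH + x] = brightness

        light_pixel(3, 2)        # turn one pixel fully on
        light_pixel(4, 2, 128)   # the pixel right next to it, at half brightness

        # Print the buffer row by row so the two lit pixels are visible.
        for y in range(HEIGHT):
            print(list(framebuffer[y * WIDTH:(y + 1) * WIDTH]))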

    As to how we get all those millions of bytes, we write source code to do that.  Typically source code is something you and I can read.  It's terse but plain looking language.  

    In Excel I can write RAND() as an MS Excel source-code function for giving me a random number.  But my computer can't read RAND, so that gets changed into a group of bytes that are then processed to create more bytes representing random numbers.  Those are then converted into real-life numbers that you and I can read.

    Bottom line: most computers deal in binary code of zeros and ones.  And the images on your computer screen are created by millions of bytes of that code, each representing something on your screen.  You should know that a byte is eight bits, but the words today's PCs work with are typically 64 bits long, not eight like I showed you.  Mainframe computers can deal in words that are even bigger than that.

  • Alan
    Lv 7
    4 weeks ago

    Any number can be represented in binary (1s and 0s), so you can have an array of numbers which represent the color of each pixel to be displayed. You can have 256 different colors just by using the numbers 0 to 255 (8 bits). Then you can create an array the size of the image, with one byte indicating the color of each pixel to be displayed. If you have a 1024 x 768 pixel display, you can create a two-dimensional array of 1024 x 768 bytes defining the color of each pixel. That will define the picture.

    Actually, in some cases each pixel may have a red, green, and blue intensity number, each defined as a separate byte. In this case, each pixel will have 3 bytes defining its color.

    Programming languages already have the setup to convert color codes into the screen display.
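
    A small Python sketch of that three-bytes-per-pixel array (scaled down to 4 x 3 so it's readable; it writes a plain-text PPM file, which most image viewers can open):

        # A 1024 x 768 image would be a big array; keep it 4 x 3 here.
        WIDTH, HEIGHT = 4, 3

        # One (red, green, blue) triple per pixel, each channel one byte (0-255).
        pixels = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]
        pixels[1][2] = (255, 0, 0)  # one red pixel
        pixels[2][3] = (0, 0, 255)  # one blue pixel

        # Write it out as a plain-text PPM image.
        with open("picture.ppm", "w") as f:
            f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")
            for row in pixels:
                for r, g, b in row:
                    f.write(f"{r} {g} {b}\n")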
