Integer Types
A data type is really nothing more than an organization of bits.
To demonstrate this idea, it's useful to look at the stdint.h
header file from the C99 Standard Library (also present in C11 and later).
#include <stdint.h>
This header file provides convenient shortcut data types for a certain number of bits:

Type      Signedness  Width
int8_t    signed      8-bit
uint8_t   unsigned    8-bit
int16_t   signed      16-bit
uint16_t  unsigned    16-bit
int32_t   signed      32-bit
uint32_t  unsigned    32-bit
int64_t   signed      64-bit
uint64_t  unsigned    64-bit
If a data type is signed, one bit is used to represent the sign, and so the upper limit of that data type is reduced relative to its unsigned counterpart. We see this in the chart below, where the maximum value falls from 255 (the 8-bit maximum) for unsigned to 127 (the 7-bit maximum) for signed. The sign bit allows the representation of negative values where necessary.
Type      Lower Limit                 Upper Limit
int8_t    −128                        127
uint8_t   0                           255
int16_t   −32,768                     32,767
uint16_t  0                           65,535
int32_t   −2,147,483,648              2,147,483,647
uint32_t  0                           4,294,967,295
int64_t   −9,223,372,036,854,775,808  9,223,372,036,854,775,807
uint64_t  0                           18,446,744,073,709,551,615
Beyond 18,446,744,073,709,551,615 (the unsigned 64-bit maximum), the C language only supports greater bit depths through compiler extensions or external libraries.
Everything else is semantics and interpretation. When we declare an int, we’re really just asking for a certain number of bits from memory.
Consider the following C program:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // get the byte width of an int on the local system
    // (sizeof yields a size_t, so the %zu specifier is used)
    printf("sizeof(int): %zu bytes\n", sizeof(int));

    // display message
    switch (sizeof(int)) {
        // 16-bit
        case 2:
            printf("This system uses 16-bit integers\n");
            break;
        // 32-bit
        case 4:
            printf("This system uses 32-bit integers\n");
            break;
        // 64-bit
        case 8:
            printf("This system uses 64-bit integers\n");
            break;
        // 128-bit
        case 16:
            printf("This system uses 128-bit integers\n");
            break;
        // default
        default:
            printf("Unable to determine byte width of int\n");
            break;
    }

    // success
    return 0;
}
And its output:
$ ./this
sizeof(int): 4 bytes
This system uses 32-bit integers
This bit of logic demonstrates that, on this system, the width of a single int is 32 bits, or 4 bytes.
The common C data types and their aliases are listed here. Note that each data type has a minimum size required by the standard.
Despite the minimum 16-bit requirement for integers declared using int,
some systems will allocate larger spaces, and so overflow situations will differ between the smaller and larger widths.
It's important to know what the limitations of each system are before developing for it. Certain systems may be limited to 16-bit or even 8-bit blocks.
Note also how many different aliases there are for each bit width, and what their format specifiers are.
Data Types & Format Specifiers

Data Type (aliases)                                 Format Specifier
char                                                %c
signed char                                         %c (or %hhi for numerical output)
unsigned char                                       %c (or %hhu for numerical output)
short, short int, signed short, signed short int    %hi
unsigned short, unsigned short int                  %hu
int, signed, signed int                             %i or %d
unsigned, unsigned int                              %u
long, long int, signed long, signed long int        %li
unsigned long, unsigned long int                    %lu
long long, long long int, signed long long,
  signed long long int                              %lli
unsigned long long, unsigned long long int          %llu
Range and Precision
In mathematics we also have numbers with fractional parts: an integer with a decimal point and further digits attached to it. These kinds of numbers are managed differently from integers to accommodate a balance of range and precision, and they have special declarations, though they still ultimately refer to a certain bit width in memory.
Data Type    Format Specifier
float        for formatted input: %f %F for decimal notation, %g %G for the
             shorter of the two, %e %E for scientific notation, or %a %A
             for hexadecimal notation
double       %lf %lF %lg %lG %le %lE %la %lA; for formatted output, the
             length modifier l is optional
long double  %Lf %LF %Lg %LG %Le %LE %La %LA
The only reason any series of bits has any relevance is because it's mapped to a context. This is how we're able to have a float
of value 1.1
and to be able to contextually perform calculations with this value within the C language syntax.
Floating-point numbers are so called because they are structured with a floating radix point to represent numbers as decimals in the context of scientific notation.
Every new variable in C is declared first by using its data type:
Data Type  Context
int        Integer
long       Larger Integer
float      Floating-Point Number
double     Larger Floating-Point Number