
Beginning Swift

Like most programming languages, Swift includes a full complement of built-in data types that store numbers, characters, strings, and Boolean values.
In the previous section, we covered the use of Swift optionals, and worked through several examples declaring an Int variable as optional and non-optional. Keep in mind that any Swift variable, of any type, can be declared as an optional.
Like most programming languages, Swift provides built-in numeric data types that represent either integer or floating-point values.
While it's likely you'll develop Swift applications exclusively on 64-bit platforms, it's important to know that Swift is available on both 32-bit and 64-bit platforms. When you use a generic integer type (Int or UInt), it is mapped to an underlying, specific equivalent that matches the current platform's word size. For example, on a 64-bit platform, Int is mapped to Int64; on a 32-bit platform, the same Int type is mapped to Int32.
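If you ever want to confirm which width Int has on the platform your code is running on, a quick check like the following sketch will show it (MemoryLayout reports sizes in bytes):

print(MemoryLayout<Int>.size) // 8 on a 64-bit platform, 4 on a 32-bit platform
print(Int.max)                // 9223372036854775807 when Int is 64 bits wide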
The following table summarizes the available Swift numeric data types:
Type | Min value | Max value |
---|---|---|
Int8 | -128 | 127 |
Int16 | -32768 | 32767 |
Int32 | -2.1 × 10⁹ | 2.1 × 10⁹ |
Int64 | -9.2 × 10¹⁸ | 9.2 × 10¹⁸ |
UInt8 | 0 | 255 |
UInt16 | 0 | 65535 |
UInt32 | 0 | 4.3 × 10⁹ |
UInt64 | 0 | 1.8 × 10¹⁹ |
Double | -1.8 × 10³⁰⁸ | 1.8 × 10³⁰⁸ |
Float | -3.4 × 10³⁸ | 3.4 × 10³⁸ |
Conceptually, a UInt64 variable will consume eight times more RAM than a UInt8 variable, so you may ask, "Should I tune my variables by selecting the smallest number of bits needed to meet requirements?" While it may seem intuitive to select the numeric type that uses the least RAM to store the variable's expected range of values, it's usually preferable to use the generic types (for example, Int when declaring integers and Double when declaring floating-point numbers).
The Swift Programming Language (Swift 4) offers this guidance: "Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability." Visit https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/ for the official documentation.
Integer values may be instantiated using base 10 (decimal), base 2 (binary), base 8 (octal), or base 16 (hexadecimal) literal values, or by assigning another Int variable of the same type to the new variable. For example, assigning the number 100 to a new Int variable holding a duration in minutes can be done in any of the following ways:
let minutes = 100        // decimal
let minutes = 0b1100100  // binary
let minutes = 0o144      // octal
let minutes = 0x64       // hexadecimal
Floating-point numbers are represented by either the Float or Double data type. In general, you should use Double, and employ Float only when specific circumstances require the smaller, 32-bit numeric variable.
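If a situation does call for the 32-bit type, the only change needed is an explicit type annotation. Here is a minimal sketch (the variable names are just illustrative):

let preciseReading = 98.62694        // inferred as Double, the default for floating-point literals
let compactReading: Float = 98.62694 // explicitly annotated as the 32-bit Float type
print(type(of: preciseReading), type(of: compactReading)) // prints: Double Float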
Declaring and assigning a value to floating-point variables follows the same syntax rules as with integer variables. For example, the following statement creates a new Double variable named interestRate and assigns an initial value to it:
var interestRate = 5.34
When assigning constant values to numeric types, Swift provides a handy format to make code more readable: the underscore character is ignored when parsing numeric literals.
This feature is most commonly used to group thousands in large integer or floating-point assignments, but it can actually be used to provide any grouping that makes code more readable. For example, the following statements all assign the value 100,000 to the variable minutes:
var minutes = 100000
var minutes = 100_000
var minutes = 10_00_00
var minutes = 0b110_000110_101000_00
Underscores can also be used for readability in floating-point literal values. For example, the following statements are equivalent:
var balance = 10000.44556
var balance = 10_000.44_556
Like many fully compiled languages, Swift is a strongly typed language, and requires explicit type conversions (or casts) when assigning the value from one variable type to a variable of a different type.
Many new Swift programmers find that Swift is even stricter than languages they've used before. In many programming languages, the compiler will implicitly convert between data types during an assignment, so long as the value contained in the variable being assigned from (on the right of the equals sign) could not overflow the variable being assigned to (on the left of the equals sign).
In other words, in many languages, the following code would be legal, since an Int8 is known to always fit into an Int16 without a numeric overflow:
Int8 smallNumber = 3;
Int16 mediumNumber = smallNumber;
However, this equivalent code in Swift would result in a compile-time error:
var smallNumber: Int8 = 3
var mediumNumber: Int16 = smallNumber
This code would generate the following error:
error: cannot convert value of type 'Int8' to specified type 'Int16'
In Swift, it's always the programmer's responsibility to ensure that assignments have the same data type on the left and right of the assignment operator (that is, the equals sign). The following code corrects the compile-time error:
var smallNumber: Int8 = 100
var mediumNumber: Int16 = Int16(smallNumber)
This requirement for explicit type conversion is one reason why most Swift programming uses the generic numeric types Int and Double, except when specific usage requires tuning for numeric range or memory storage size.
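When you do need to convert between sizes and the value might not fit, the failable init?(exactly:) initializers offer a safer alternative to a plain conversion, which traps at runtime on overflow. The following is a brief sketch (variable names are illustrative):

let bigValue: Int16 = 300

// Int8(bigValue) would trap at runtime because 300 overflows Int8.
// The failable initializer returns nil instead, letting us handle the case.
if let fits = Int8(exactly: bigValue) {
    print("Converted to Int8: \(fits)")
} else {
    print("\(bigValue) does not fit in an Int8")
}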
Now, let's see how to use various numeric variable types by following these steps:
Launch Xcode as before, and create a new playground named Topic B Using Numeric Types.playground.
Add the following code to the playground to create three Int variables, using binary, base 10, and base 16 literal notation, respectively:
var base2 = 0b101010
var base10 = 42
var hex = 0x2A
Now add the following three corresponding lines to print the data type and value for each of the variables you just created.
print("Printing \(type(of: base2)): \(base2)") print("Printing \(type(of: base10)): \(base10)") print("Printing \(type(of: hex)): \(hex)")
Examining the output, note that the three variables all have the same data type (Int) and the same value (42 in base 10).
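If everything is entered correctly, the playground console output should look something like this:

Printing Int: 42
Printing Int: 42
Printing Int: 42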
Add the following lines of code to create two more variables, and to print the types and values for each:
var scientific = 4.2E+7
let double = 4.99993288828
print("Printing \(type(of: scientific)): \(scientific)")
print("Printing \(type(of: double)): \(double)")
Note that both variables were created as Double types, even though the value of the first is actually a whole number. Swift's inference system doesn't always look at the actual value; in this case, the presence of scientific notation in the literal caused Swift to assume the value should be a Double.
Now add the following lines to cast and round the variable named double to an Int:
var castToInt = Int(double)
var roundToInt = Int(double.rounded())
print("Printing \(type(of: castToInt)): \(castToInt)")
print("Printing \(type(of: roundToInt)): \(roundToInt)")
As you probably expected, castToInt discarded the fractional value of the original double variable. For the roundToInt variable, we called the .rounded() function on double, and then cast that value. Since 4.999... was rounded up to 5 before being cast, the Int contains the rounded value.
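If you need finer control over how the fractional part is handled, .rounded() also accepts a rounding rule. A small sketch of the available behaviors:

let value = 4.5
print(Int(value.rounded()))                 // 5, the default rule: to nearest, away from zero
print(Int(value.rounded(.down)))            // 4, always toward negative infinity
print(Int(value.rounded(.up)))              // 5, always toward positive infinity
print(Int(value.rounded(.toNearestOrEven))) // 4, "banker's rounding"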
Finally, add the following lines to create a very large unsigned integer and then print its type and value:
var bigUnsignedNumber: UInt64 = 18_000_000_000_000_000_000
print("Printing \(type(of: bigUnsignedNumber)): \(bigUnsignedNumber)")
This code works as expected, printing an integer with 20 digits (the underscores are added to make it easier to count the digits).
Note that in this case, we specified UInt64 as the data type for this variable. Had we not made the type explicit, Swift's type inference rules would have assigned the smaller Int data type to the variable, and it would have overflowed.
Again, keep in mind that the inference engine examines the format of a literal as much as (or more than) the numeric value being assigned. You should rely on the inference engine by default, but remember that you may sometimes need to be explicit when you know more about how a variable will be used than Swift can infer.
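To see why the explicit UInt64 annotation matters here, compare the literal to the bounds of the default Int type. A quick sketch:

print(Int.max)    // 9223372036854775807, the largest value a 64-bit Int can hold
print(UInt64.max) // 18446744073709551615, comfortably holds 18_000_000_000_000_000_000

// Without the UInt64 annotation, the literal exceeds Int.max and the compiler
// rejects it at build time with an overflow error (exact wording varies by version):
// var tooBig = 18_000_000_000_000_000_000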
In Swift, the Boolean data type is Bool, and stores a value of true or false. As with other data types, in the case that a Bool value is not yet known, a Bool can be declared as optional, for example, Bool?.
For example, the following code declares a Boolean in Swift, and then changes its value:
var isChecked = false
isChecked = true
Testing the value of a Bool is similar to how it's done in other C-inspired languages, for example:
if isChecked {
    // statements to execute if isChecked is true
}

if isChecked == true {
    // statements to execute if isChecked is true
}

if !isChecked {
    // statements to execute if isChecked is false
}
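Since a Bool can also be declared as an optional when its value is not yet known, here is a short sketch of how that might look (the variable name is illustrative):

var agreedToTerms: Bool? = nil // no answer recorded yet

agreedToTerms = true

// An optional Bool must be unwrapped (or compared explicitly) before use.
if let agreed = agreedToTerms, agreed {
    // statements to execute only when a value exists and it is true
    print("Terms accepted")
}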
The Character data type in Swift is an extended grapheme cluster.
What does that mean?
An extended grapheme cluster is an ordered sequence of one or more Unicode scalars (that is, values) that, when taken together, produce a human-readable character.
Most important to understand is that, unlike ASCII or ANSI character representations many programmers have worked with before, a Character in Swift may be made of more than one Unicode value.
In Swift 4, the underlying complexities of Unicode, scalar values, and extended grapheme clusters are largely managed for you, but as you begin to work natively with Unicode characters and strings, bear in mind that the Swift Character/String architecture was developed from the ground up around Unicode character representation—not ANSI/ASCII as many other languages were.
The following are two examples creating new Character variables, and assigning literal values:
let ch1: Character = "A"
let ch2: Character = "😎"
Note the following regarding this assignment:
In Swift, a Character literal is delimited by a double quote, rather than the single quote that's common in most C-inspired languages.
Because the Swift compiler's type inference rules assume that double quotes around a literal imply a String, the above assignments must explicitly declare the variables as the Character type; otherwise, the Swift compiler will create ch1 and ch2 as String values.
To construct a Character type using Unicode values, you can assign an escape sequence, or use the UnicodeScalar struct to create a Character using numeric Unicode values as input.
The following line of code creates a UnicodeScalar from the value 65 (the ASCII value for the English letter A), and then assigns it to the immutable variable ch1:
let ch1 = Character(UnicodeScalar(65))
In this case, there is no ambiguity with regards to double quotation marks, so it's not necessary to explicitly assign the Character type during this assignment.
It's also common to construct a Character using a UnicodeScalar escape sequence within double quotation marks. The following creates a character variable containing an emoji character represented by the UnicodeScalar 1F601:
let ch3 = "\u{1F601}" // sets ch3 to "😁"
While Unicode scalars are conceptually similar to ASCII/ANSI value encoding, a Swift Character may be made of more than one Unicode scalar, while ASCII and ANSI use a single numeric value to represent each character. For example, an accented Western letter can be expressed by combining two Unicode scalars: the base letter and a combining accent.
We can construct the Unicode representation of an accented e character as follows:
let ch4 = "e\u{301}" // é
The expression on the right of the assignment contains the literal letter e, followed by the escaped value for the accent modifier (301). The Swift compiler combines these two elements into a single extended grapheme cluster.
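A short sketch in a playground confirms that the combined sequence behaves as a single character, and compares equal to the precomposed form:

let combined = "e\u{301}"      // the letter e plus a combining acute accent
let precomposed = "\u{E9}"     // the single precomposed character é

print(combined.count)          // 1, because the pair forms one extended grapheme cluster
print(combined == precomposed) // true, since Swift treats canonically equivalent strings as equal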
Strings in Swift are very similar to strings in other programming languages. As string handling is so central to any application development project, we'll dedicate an entire subsequent lesson to Swift's powerful string handling capabilities. In this section, we'll discuss the basics for declaring and using a string.
Fundamentally, strings are collections of Character values, supporting the familiar assignment operator (=), substrings, concatenation, and C-inspired escape characters.
Instantiating a string variable is highly intuitive. The following statements create string variables:
var alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" let macCharacters = "⌘⌃⌥⇧ ⏎⌫⇪⎋⇥" let emoji = "😎😂🎃🐳🍎😜😆"
As in many languages, Swift strings can be concatenated using the plus (+) operator:
let alphaMac = alphabet + macCharacters
String also supports the addition assignment operator (+=):
alphabet += macCharacters
One difference between Swift strings and strings in many languages is how individual elements of strings are accessed. Specifically, the following syntax with Swift strings is illegal:
let ch = alphabet[4]
error: 'subscript' is unavailable: cannot subscript String with an Int, see the documentation comment for discussion
In Swift, the input to the subscript operator (that is, what's between the [] characters) is expected to be of type String.Index, not Int. In practice, you will construct an index, then pass it to the subscript operator, for example:
let idx = alphabet.index(alphabet.startIndex, offsetBy: 4)
let ch = alphabet[idx] // ch is assigned the character "E"
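The same index type is used to extract ranges of characters. Continuing with the alphabet string, a brief sketch:

let start = alphabet.index(alphabet.startIndex, offsetBy: 4)
let end = alphabet.index(start, offsetBy: 3)
let slice = alphabet[start...end] // "EFGH", returned as a Substring
print(String(slice))              // convert to String when an independent copy is needed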
Obtaining the length of a string is quite easy: simply read the string's count property:
var alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" let alphabetLength = alphabet.count // 26
We have now reached the end of this section. Here, we worked with the different data types in Swift, specifically numeric, Boolean, character, and string data types.
Now that you've learned about the various data types available with Swift, let's put this knowledge into practice by using various types together, and also using the Apple Foundation framework.
Use an Xcode playground to practice various data types. You'll be using numeric data types, formatting them as strings, and using string interpolation to print string values from various data types.
Launch Xcode as before, and create a new playground named Data Type Summary.playground.
Add the following code to the playground to create an immutable Double with an initial value:
let dVal = 4.9876
Next, create a mutable Boolean variable with an initial value of true, and another variable set to the Double variable after rounding to a whole number:
var iValRounded = true
var iVal = Int(dVal.rounded())
Next, we're going to use a class from Foundation to create a string representation of the Double value, rounded to two digits. If you're not familiar with NumberFormatter, don't worry. This is just one of the many utility classes Apple provides in its expansive SDK for macOS and iOS:
import Foundation // NumberFormatter and NSNumber live in Foundation (playground templates usually import it already)

var formatDigits = 2
let nf = NumberFormatter()
nf.numberStyle = .decimal
nf.maximumFractionDigits = formatDigits
var formattedDouble = nf.string(from: NSNumber(value: dVal)) ?? "#Err"
Because NumberFormatter.string returns an optional, we need either to check it (with if/let) or, as here, provide a default value ("#Err") in case the function returns nil. Note that formattedDouble is declared with var because we'll assign a new value to it in a later step.
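If you prefer to handle the nil case explicitly rather than substituting a default, an if/let version might look like this sketch:

if let formatted = nf.string(from: NSNumber(value: dVal)) {
    print("Formatted value: \(formatted)")
} else {
    print("The value could not be formatted")
}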
Now add the following line to print a statement about the values we've created:
print("The original number was \(formattedDouble) (rounded to \(formatDigits) decimal places), while the value \(iValRounded ? "rounded" : "unrounded") to Integer is \(iVal).")
The output of this code is as follows:
The original number was 4.99 (rounded to 2 decimal places), while the value rounded to Integer is 5.
Finally, add the following lines to change the rounding strategy, and print a sentence about the result of the new string conversions:
formatDigits = 0
nf.maximumFractionDigits = formatDigits
formattedDouble = nf.string(from: NSNumber(value: dVal)) ?? "#Err"
iValRounded = false
iVal = Int(dVal)
print("The original number was \(formattedDouble) (rounded to \(formatDigits) decimal places), while the value \(iValRounded ? "rounded" : "unrounded") to Integer is \(iVal).")
The output of this second sentence is as follows:
The original number was 5 (rounded to 0 decimal places), while the value unrounded to Integer is 4.