Oracle Database 11g Architecture

Introduction to Oracle Database 11g

A database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information. A database server is the key to information management. In general, a server reliably manages a large amount of data in a multiuser environment so that many users can concurrently access the same data. A database server also prevents unauthorized access and provides efficient solutions for failure recovery.

Oracle Database is the first database designed for enterprise grid computing, the most flexible and cost-effective way to manage information and applications. Enterprise grid computing creates large pools of industry-standard, modular storage and servers. With this architecture, each new system can be rapidly provisioned from the pool of components. There is no need to provision for peak workloads, because capacity can be easily added or reallocated from the resource pools as needed.

The database has logical structures and physical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting the access to logical storage structures.

Oracle Database 11g Architecture

Basically, there are two main components of an Oracle database server: the instance and the database itself. An instance consists of memory structures and background processes, whereas the database refers to the physical disk resources.

[Figure: Oracle Database 11g architecture]

This section will cover the following topics related to the Oracle architecture:

— >> Process Architecture

— >> Oracle Memory Structures

— >> Oracle Background Processes

— >> Overview of Physical & Logical Database Structures

— >> Overview of Parameter File, Server Parameter File & Password File

— >> Startup & Shutdown Process

—- >>> Process Architecture

A process is a “thread of control” or a mechanism in an operating system that can run a series of steps. Some operating systems use the terms job or task. A process generally has its own private memory area in which it runs.

An Oracle database server has two general types of processes: user processes and Oracle processes.

[Figure: Oracle process architecture]

User (Client) Processes

User processes are created and maintained to run the software code of an application program (such as an OCI or OCCI program) or an Oracle tool (such as Enterprise Manager). User processes also manage communication with the server process through the program interface, which is described in a later section.

Oracle Processes

Oracle processes are invoked by other processes to perform functions on behalf of the invoking process.

Oracle creates server processes to handle requests from connected user processes. A server process communicates with the user process and interacts with Oracle to carry out requests from the associated user process. For example, if a user queries some data not already in the database buffers of the SGA, then the associated server process reads the proper data blocks from the datafiles into the SGA.

Oracle can be configured to vary the number of user processes for each server process. In a dedicated server configuration, a server process handles requests for a single user process. A shared server configuration lets many user processes share a small number of server processes, minimizing the number of server processes and maximizing the use of available system resources.
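Whether an instance is running dedicated or shared server connections can be checked from the shared server initialization parameters. The query below is a minimal sketch for illustration; a SHARED_SERVERS value of 0 generally means only dedicated server processes are in use:

-- Sketch: inspect shared server related parameters
SELECT NAME, VALUE
FROM   V$PARAMETER
WHERE  NAME IN ('shared_servers', 'dispatchers');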

—- >>> Oracle Memory Structures

Focus first on the memory components of the Oracle instance. This set of memory components represents a “living” version of Oracle that is available only when the instance is running. There are two basic memory structures in the Oracle instance. The first and most important is called the System Global Area, which is commonly referred to as the SGA. The other memory structure in the Oracle instance is called the Program Global Area, or PGA. This discussion will explain the components of the SGA and the PGA, and also cover the factors that determine the storage of information about users connected to the Oracle instance.

The Oracle SGA

The Oracle SGA is the most important memory structure in Oracle. When DBAs talk about most things related to memory, they are talking about the SGA. The SGA stores several different components of memory usage that are designed to execute processes to obtain data for user queries as quickly as possible while also maximizing the number of concurrent users that can access the Oracle instance. The SGA consists of the following components:

-> The buffer cache

-> The shared pool

-> The redo log buffer

-> The large pool

-> The Java pool

The Buffer cache consists of buffers that are the size of database blocks. They are designed to store data blocks recently used by user SQL statements in order to improve performance for subsequent selects and data changes. The shared pool has two required components and one optional component. The required components of the shared pool are the shared SQL library cache and the data dictionary cache. The optional component of the shared pool includes session information for user processes connecting to the Oracle instance. The final area of the SGA is the redo log buffer, which stores online redo log entries in memory until they can be written to disk.
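To see how memory is currently distributed among these components, the V$SGAINFO dynamic performance view can be queried. This is a simple sketch for illustration only:

-- Sketch: current sizes of the SGA components
SELECT NAME, BYTES
FROM   V$SGAINFO
ORDER BY BYTES DESC;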

Explore the usage of the shared pool in the Oracle database architecture. The shared SQL library cache is designed to store parse information for SQL statements executing against the database. Parse information includes the set of database operations that the SQL execution mechanism will perform in order to obtain data requested by the user processes. This information is treated as a shared resource in the library cache. If another user process comes along wanting to run the same query that Oracle has already parsed for another user, the database will recognize the opportunity for reuse and let the user process utilize the parse information already available in the shared pool. Of course, the specific data returned by the query for each user will not reside in the shared pool, and thus not be shared, because sharing data between applications represents a data integrity/security issue.
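How effectively parsed SQL is being reused can be observed in the V$LIBRARYCACHE view; a high GETHITRATIO for the SQL AREA namespace suggests that most requests are satisfied from the shared pool. A minimal sketch:

-- Sketch: library cache hit ratios for shared SQL
SELECT NAMESPACE, GETS, GETHITRATIO, PINS, PINHITRATIO
FROM   V$LIBRARYCACHE
WHERE  NAMESPACE = 'SQL AREA';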

The other mandatory component of the shared pool is the data dictionary cache, also referred to by many DBAs as the “row” cache. This memory structure is designed to store the data from the Oracle data dictionary in order to improve response time on data dictionary queries. Since all user processes and the Oracle database internal processes use the data dictionary, the database as a whole benefits in terms of performance from the presence of cached dictionary data in memory.

The Redo log buffer allows user processes to write their redo log entries to a memory area in order to speed processing on the tracking of database changes. One fact that is important to remember about redo logs and user processes is that every process that makes a change to the database must write an entry to the redo log in order to allow Oracle to recover the change. When the database is set up to archive redo logs, these database changes are kept in order to rebuild database objects in the event of a disk failure. The availability of a buffer for storing redo information in memory prevents the need for user processes to spend the extra time it takes to write an entry directly to disk. By having all user processes writing those redo log records to memory, the Oracle database avoids contention for disk usage that would invariably cause database performance to slow down. Since every data change process has to write a redo log entry, it makes sense that processes be able to write that change as quickly as possible in order to boost speed and avoid problems.

Another key SGA resource is the buffer cache. This area of memory allows for selective performance gains on obtaining and changing data. The buffer cache stores data blocks that contain row data that has been selected or updated recently. When the user wants to select data from a table, Oracle looks in the buffer cache to see if the data block that contains the row has already been loaded. If it has, then the buffer cache has achieved its selective performance improvement by not having to look for the data block on disk. If not, then Oracle must locate the data block that contains the row, load it into memory, and present the selected output to the user. There is one overall performance gain that the buffer cache provides that is important to note. No user process ever interfaces directly with any record on a disk. This fact is true for the redo log buffer as well.

After the user’s select statement has completed, Oracle keeps the block in memory according to a special algorithm that eliminates buffers according to how long ago they were used. The procedure is the same for a data change, except that after Oracle writes the data to the row, the block that contains the row will then be called a dirty buffer, which simply means that some row in the buffer has been changed. Another structure exists in the buffer cache, called the dirty buffer write queue, and it is designed to store those dirty buffers until the changes are written back to disk.
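A traditional (if rough) way to gauge how often Oracle finds requested blocks already in the buffer cache is the buffer cache hit ratio, computed from statistics in V$SYSSTAT. The query below is a commonly used sketch, not a definitive tuning tool:

-- Sketch: buffer cache hit ratio from V$SYSSTAT statistics
SELECT 1 - (phy.VALUE / (cur.VALUE + con.VALUE)) AS buffer_cache_hit_ratio
FROM   V$SYSSTAT cur, V$SYSSTAT con, V$SYSSTAT phy
WHERE  cur.NAME = 'db block gets'
AND    con.NAME = 'consistent gets'
AND    phy.NAME = 'physical reads';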

The large pool is an optional area of memory in the SGA. It is used to relieve the burden placed on the shared pool. It is also used for I/O server processes and backup and restore operations. The large pool size can be set by the LARGE_POOL_SIZE parameter in the init.ora initialization parameter file.

The Java pool is used to service parsing of Java commands. Its size can be set by the JAVA_POOL_SIZE parameter in the init.ora initialization parameter file.

Program global area (PGA)

A PGA is a memory region that contains data and control information for a server process. It is nonshared memory created by Oracle Database when a server process is started. Access to the PGA is exclusive to the server process. There is one PGA for each server process. Background processes also allocate their own PGAs. The total memory used by all individual PGAs is known as the total instance PGA memory, and the collection of individual PGAs is referred to as the total instance PGA, or just instance PGA. You use database initialization parameters to set the size of the instance PGA, not individual PGAs.
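Instance-wide PGA usage can be observed through the V$PGASTAT view, and the overall target is normally controlled by the PGA_AGGREGATE_TARGET initialization parameter. A minimal sketch:

-- Sketch: key instance PGA statistics
SELECT NAME, VALUE
FROM   V$PGASTAT
WHERE  NAME IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');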
 

Buffer States

The database uses internal algorithms to manage buffers in the cache. A buffer can be in any of the following mutually exclusive states:

Unused

The buffer is available for use because it has never been used or is currently unused. This type of buffer is the easiest for the database to use.

Clean

This buffer was used earlier and now contains a read-consistent version of a block as of a point in time. The block contains data but is “clean” so it does not need to be checkpointed. The database can pin the block and reuse it.

Dirty

The buffer contains modified data that has not yet been written to disk. The database must checkpoint the block before reusing it.

Every buffer has an access mode: pinned or free (unpinned). A buffer is “pinned” in the cache so that it does not age out of memory while a user session accesses it. Multiple sessions cannot modify a pinned buffer at the same time.

—- >>> Oracle Background Processes

Background Processes

There are mandatory, optional, and slave background processes that are started whenever an Oracle instance starts up. These background processes serve all system users. We will cover the mandatory processes in detail.

          Mandatory Background Processes

— >> Process Monitor Process (PMON)

— >> System Monitor Process (SMON)

— >> Database Writer Process (DBWn)

— >> Log Writer Process (LGWR)

— >> Checkpoint Process (CKPT)

 Optional Processes

— >> Archiver Process (ARCn)

— >> Coordinator Job Queue (CJQ0)

— >> Dispatcher (number “nnn”) (Dnnn)

This query will display all background processes running to serve a database:

SELECT PNAME
FROM   V$PROCESS
WHERE  PNAME IS NOT NULL
ORDER BY PNAME;

 PMON (Process Monitor)

 The process monitor (PMON) monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally. PMON is responsible for cleaning up the database buffer cache and freeing resources that the client process was using. For example, PMON resets the status of the active transaction table, releases locks that are no longer required, and removes the process ID from the list of active processes.

PMON also registers information about the instance and dispatcher processes with the Oracle Net listener (see “The Oracle Net Listener”). When an instance starts, PMON polls the listener to determine whether it is running. If the listener is running, then PMON passes it relevant parameters. If it is not running, then PMON periodically attempts to contact it.
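If the listener is started after the instance, you do not have to wait for PMON's next polling attempt; a DBA can ask PMON to register with the listener immediately. A minimal sketch:

SQL> ALTER SYSTEM REGISTER;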

SMON (System Monitor)

The System Monitor (SMON) does system-level cleanup duties. 

It is responsible for instance recovery by applying entries in the online redo log files to the data files. 

Other processes can call SMON when it is needed.

It also performs other activities as outlined below.

If an Oracle Instance fails, all information in memory not written to disk is lost.  SMON is responsible for recovering the instance when the database is started up again.  It does the following:

Rolls forward to recover data that was recorded in a Redo Log File, but that had not yet been recorded to a datafile by DBWn.  SMON reads the Redo Log Files and applies the changes to the data blocks.  This recovers all transactions that were committed because these were written to the Redo Log Files prior to system failure.

Opens the database to allow system users to logon.

Rolls back uncommitted transactions.

SMON also does limited space management. It combines (coalesces) adjacent areas of free space in the database’s data files for tablespaces that are dictionary managed.

 It also deallocates temporary segments to create free space in the data files.

 DBWn (Database Writer)

The Database Writer writes modified blocks from the database buffer cache to the data files.

 One database writer process (DBW0) is sufficient for most systems.

The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes. 

 The purpose of DBWn is to improve system performance by caching writes of database blocks from the Database Buffer Cache back to data files. 

Blocks that have been modified and that need to be written back to disk are termed “dirty blocks.” 

The DBWn also ensures that there are enough free buffers in the Database Buffer Cache to service Server Processes that may be reading data from data files into the Database Buffer Cache. 

Performance improves because, by delaying the writing of changed database blocks back to disk, a Server Process may find the data needed to satisfy a User Process request already residing in memory!

DBWn writes to data files when certain events occur: for example, when a checkpoint occurs, when a server process cannot find a free buffer after scanning a threshold number of buffers, or on a timeout.
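The number of database writer processes can be checked, and if necessary increased, through the DB_WRITER_PROCESSES parameter. The sketch below assumes the instance was started with an SPFILE; because the parameter is static, the change only takes effect after a restart:

SQL> SHOW PARAMETER db_writer_processes
SQL> -- static parameter: requires SCOPE=SPFILE and an instance restart
SQL> ALTER SYSTEM SET db_writer_processes = 2 SCOPE=SPFILE;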

 LGWR (Log Writer)

 The Log Writer (LGWR) writes contents from the Redo Log Buffer to the Redo Log File that is in use. 

These are sequential writes since the Redo Log Files record database modifications based on the actual time that the modification takes place. 

LGWR actually writes before the DBWn writes and only confirms that a COMMIT operation has succeeded when the Redo Log Buffer contents are successfully written to disk. 

LGWR can also call the DBWn to write contents of the Database Buffer Cache to disk. 

LGWR writes when a user process commits a transaction, when the Redo Log Buffer is one-third full, every three seconds, or before DBWn writes modified buffers to disk.

 CKPT (Checkpoint)

The Checkpoint (CKPT) process writes information to update the database control files and headers of data files.

A checkpoint identifies a point in time with regard to the Redo Log Files where instance recovery is to begin should it be necessary. 

It can tell DBWn to write blocks to disk.

A checkpoint is taken, at a minimum, once every three seconds.

 Think of a checkpoint record as a starting point for recovery.  DBWn will have completed writing all buffers from the Database Buffer Cache to disk prior to the checkpoint, thus those records will not require recovery.  This does the following:

Ensures modified data blocks in memory are regularly written to disk – CKPT can call the DBWn process in order to ensure this and does so when writing a checkpoint record.

Reduces Instance Recovery time by minimizing the amount of work needed for recovery since only Redo Log File entries processed since the last checkpoint require recovery.

Causes all committed data to be written to data files during database shutdown.

[Figure: checkpoint processing]

If a Redo Log File fills up and a switch is made to a new Redo Log File (this is covered in more detail in a later module), the CKPT process also writes checkpoint information into the headers of the data files. 

 Checkpoint information written to control files includes the system change number (the SCN is a number stored in the control file and in the headers of the database files that are used to ensure that all files in the system are synchronized), location of which Redo Log File is to be used for recovery, and other information.

 CKPT does not write data blocks or redo blocks to disk – it calls DBWn and LGWR as necessary.

MMON (Manageability Monitor Process) and MMNL (Manageability Monitor Lite Process)

The Manageability Monitor Process (MMON) performs tasks related to the Automatic Workload Repository (AWR) – a repository of statistical data in the SYSAUX tablespace (see figure below) – for example, MMON writes when a metric violates its threshold value, takes snapshots, and captures statistics for recently modified SQL objects.

[Figure: MMON and the Automatic Workload Repository]

The Manageability Monitor Lite Process (MMNL) writes statistics from the Active Session History (ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is full.

—- >> Optional Background Processes

Optional Background Process Definition:

ARCn: Archiver – One or more archive processes copy the online redo log files to archival storage when they are full or a log switch occurs.

CJQ0:  Coordinator Job Queue – This is the coordinator of job queue processes for an instance. It monitors the JOB$ table (table of jobs in the job queue) and starts job queue processes (Jnnn) as needed to execute jobs. The Jnnn processes execute job requests created by the DBMS_JOB package.

Dnnn:  Dispatcher number “nnn”, for example, D000 would be the first dispatcher process – Dispatchers are optional background processes, present only when the shared server configuration is used. Shared server is discussed in your readings on the topic “Configuring Oracle for the Shared Server”. 

FBDA: Flashback Data Archiver Process – This archives historical rows of tracked tables into Flashback Data Archives. When a transaction containing DML on a tracked table commits, this process stores the pre-image of the rows into the Flashback Data Archive. It also keeps metadata on the current rows. FBDA automatically manages the flashback data archive for space, organization, and retention.

 Of these, you will most often use ARCn (archive) when you automatically archive redo log file information (covered in a later module).

ARCn (Archiver)

 While the Archiver (ARCn) is an optional background process, we cover it in more detail because it is almost always used for production systems storing mission critical information.  

The ARCn process must be used to recover from loss of a physical disk drive for systems that are “busy” with lots of transactions being completed.

It performs the tasks listed below.

 When a Redo Log File fills up, Oracle switches to the next Redo Log File. 

The DBA creates several of these and the details of creating them are covered in a later module. 

If all Redo Log Files fill up, then Oracle switches back to the first one and uses them in a round-robin fashion by overwriting ones that have already been used.

Overwritten Redo Log Files have information that, once overwritten, is lost forever.

ARCHIVELOG Mode:

If the database is in what is termed ARCHIVELOG mode, then as the Redo Log Files fill up, they are individually written to Archived Redo Log Files by ARCn.

LGWR does not overwrite a Redo Log File until archiving has completed. 

Committed data is not lost forever and can be recovered in the event of a disk failure. 

Only the contents of the SGA will be lost if an Instance fails.

In NOARCHIVELOG Mode:

The Redo Log Files are overwritten and not archived. 

Recovery can only be made to the last full backup of the database files. 

All committed transactions after the last full backup are lost, and you can see that this could cost the firm a great deal of money.

When running in ARCHIVELOG mode, the DBA is responsible for ensuring that the Archived Redo Log Files do not consume all available disk space!  Usually, after two complete backups are made, any Archived Redo Log Files for prior backups are deleted.
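You can confirm which mode the database is running in, and switch modes while the database is mounted but not open. The following is only a sketch of the typical sequence for enabling ARCHIVELOG mode; the details are covered in a later module:

SQL> SELECT LOG_MODE FROM V$DATABASE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;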

— >> Overview of Physical & Logical Database Structures

Overview of Physical Database Structures

The following sections explain the physical database structures of an Oracle database, including datafiles, redo log files, and control files.

[Figure: physical database structures – datafiles, control files, and redo log files]

— >> Datafiles

Every Oracle database has one or more physical datafiles. The datafiles contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the datafiles allocated for a database.

The characteristics of datafiles are:

A datafile can be associated with only one database.

Datafiles can have certain characteristics set to let them automatically extend when the database runs out of space.

One or more datafiles form a logical unit of database storage called a tablespace.

Data in a datafile is read, as needed, during normal database operation and stored in the memory cache of Oracle. For example, assume that a user wants to access some data in a table of a database. If the requested information is not already in the memory cache for the database, then it is read from the appropriate datafiles and stored in memory.

Modified or new data is not necessarily written to a datafile immediately. To reduce the amount of disk access and to increase performance, data is pooled in memory and written to the appropriate datafiles all at once, as determined by the database writer process (DBWn) background process.
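The datafiles that make up a database, together with their sizes and autoextend settings, can be listed from the DBA_DATA_FILES data dictionary view. A minimal sketch:

-- Sketch: datafiles, their tablespaces, sizes, and autoextend settings
SELECT FILE_NAME, TABLESPACE_NAME, BYTES/1024/1024 AS SIZE_MB, AUTOEXTENSIBLE
FROM   DBA_DATA_FILES
ORDER BY TABLESPACE_NAME;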

— >> Control Files

Every Oracle database has a control file. A control file contains entries that specify the physical structure of the database. For example, it contains the following information:

Database name

Names and locations of datafiles and redo log files

Time stamp of database creation

Oracle can multiplex the control file, that is, simultaneously maintain a number of identical control file copies, to protect against a failure involving the control file.

Every time an instance of an Oracle database is started, its control file identifies the database and redo log files that must be opened for database operation to proceed. If the physical makeup of the database is altered (for example, if a new datafile or redo log file is created), then the control file is automatically modified by Oracle to reflect the change. A control file is also used in database recovery.
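The locations of the current control file copies can be listed from the V$CONTROLFILE view (they are also recorded in the CONTROL_FILES initialization parameter). A minimal sketch:

SELECT NAME FROM V$CONTROLFILE;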

— >> Redo Log Files

Every Oracle database has a set of two or more redo log files. The set of redo log files is collectively known as the redo log for the database. A redo log is made up of redo entries (also called redo records).

The primary function of the redo log is to record all changes made to data. If a failure prevents modified data from being permanently written to the datafiles, then the changes can be obtained from the redo log, so work is never lost.

To protect against a failure involving the redo log itself, Oracle allows a multiplexed redo log so that two or more copies of the redo log can be maintained on different disks.

The information in a redo log file is used only to recover the database from a system or media failure that prevents database data from being written to the datafiles. For example, if an unexpected power outage terminates database operation, then data in memory cannot be written to the datafiles, and the data is lost. However, lost data can be recovered when the database is opened, after power is restored. By applying the information in the most recent redo log files to the database datafiles, Oracle restores the database to the time at which the power failure occurred.
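The redo log groups and their member files can be examined through the V$LOG and V$LOGFILE views. A minimal sketch:

-- Sketch: redo log groups and their members
SELECT GROUP#, MEMBERS, BYTES/1024/1024 AS SIZE_MB, STATUS
FROM   V$LOG;

SELECT GROUP#, MEMBER
FROM   V$LOGFILE;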

Archive Log Files

You can enable automatic archiving of the redo log. Oracle automatically archives log files when the database is in ARCHIVELOG mode.

Overview of Logical Database Structures

The logical storage structures, including data blocks, extents, and segments, enable Oracle to have fine-grained control of disk space use.

Tablespaces

A database is divided into logical storage units called tablespaces, which group related logical structures together. For example, tablespaces commonly group together all application objects to simplify some administrative operations.

[Figure: tablespaces and datafiles]

Each database is logically divided into one or more tablespaces. One or more datafiles are explicitly created for each tablespace to physically store the data of all logical structures in a tablespace. The combined size of the datafiles in a tablespace is the total storage capacity of the tablespace.

Every Oracle database contains a SYSTEM tablespace and a SYSAUX tablespace. Oracle creates them automatically when the database is created. The system default is to create a smallfile tablespace, which is the traditional type of Oracle tablespace. The SYSTEM and SYSAUX tablespaces are created as smallfile tablespaces.

Oracle also lets you create bigfile tablespaces. This allows Oracle Database to contain tablespaces made up of single large files rather than numerous smaller ones. This lets Oracle Database utilize the ability of 64-bit systems to create and manage ultralarge files. The consequence of this is that Oracle Database can now scale up to 8 exabytes in size. With Oracle-managed files, bigfile tablespaces make datafiles completely transparent for users. In other words, you can perform operations on tablespaces, rather than the underlying datafiles.
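As an illustration, a bigfile tablespace is created with a single large datafile. The statement below is only a sketch; the tablespace name and file path are hypothetical and would need to match your environment:

-- Sketch: hypothetical bigfile tablespace with one large datafile
CREATE BIGFILE TABLESPACE app_data
  DATAFILE '/u01/app/oracle/oradata/orcl/app_data01.dbf' SIZE 10G
  AUTOEXTEND ON;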

Online and Offline Tablespaces

A tablespace can be online (accessible) or offline (not accessible). A tablespace is generally online, so that users can access the information in the tablespace. However, sometimes a tablespace is taken offline to make a portion of the database unavailable while allowing normal access to the remainder of the database. This makes many administrative tasks easier to perform.
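Taking a tablespace offline and bringing it back online is done with ALTER TABLESPACE. The sketch below uses a hypothetical tablespace named app_data:

ALTER TABLESPACE app_data OFFLINE NORMAL;
ALTER TABLESPACE app_data ONLINE;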

Oracle Data Blocks

At the finest level of granularity, Oracle database data is stored in data blocks. One data block corresponds to a specific number of bytes of physical database space on disk. The standard block size is specified by the DB_BLOCK_SIZE initialization parameter. In addition, you can specify up to five other block sizes. A database uses and allocates free database space in Oracle data blocks.

Extents

The next level of logical database space is an extent. An extent is a specific number of contiguous data blocks, obtained in a single allocation, used to store a specific type of information.

Segments

Above extents, the level of logical database storage is a segment. A segment is a set of extents allocated for a certain logical structure. Common segment types include data (table) segments, index segments, undo segments, and temporary segments.
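Segments, their types, and the extents and blocks they occupy can be inspected through the DBA_SEGMENTS view. The sketch below uses HR as a hypothetical owner:

-- Sketch: segments owned by a hypothetical HR schema, largest first
SELECT SEGMENT_NAME, SEGMENT_TYPE, EXTENTS, BLOCKS, BYTES
FROM   DBA_SEGMENTS
WHERE  OWNER = 'HR'
ORDER BY BYTES DESC;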

— >> Overview of Parameter File, Server Parameter File & Password File

Parameter File (PFILE – INITSID.ORA):

Parameter files contain a list of configuration parameters for that instance and database. You can read or change this file. The file contains all the Oracle parameters used to configure a database instance. In this file, you can reset and change the Buffer Cache size, Shared Pool size, Redo Log Buffer size, etc. You use this file to increase or decrease the size of the System Global Area (SGA). You also can change the location of your control files, the mode of the database (such as archivelog mode or noarchivelog mode), and many other parameter options that you will learn about in the course of this book.

Server Parameter File (SPFILESID.ora):

This file is in binary format and you cannot edit it directly. You should create the Server Parameter file (CREATE SPFILE FROM PFILE) and start your database using the SPFILE if you want to change database parameters dynamically. There are a few parameters for which you still need to shut down and start up the database before the change takes effect. You will learn all about these parameters in the course of this book.
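A minimal sketch of creating an SPFILE and then changing a parameter dynamically is shown below; the SGA_TARGET value is only an example and assumes the instance has been restarted with the SPFILE:

SQL> CREATE SPFILE FROM PFILE;
SQL> -- example only: change a dynamic parameter in memory and in the SPFILE
SQL> ALTER SYSTEM SET sga_target = 800M SCOPE=BOTH;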

Password file:

The Oracle orapwd command line utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net.

$ orapwd file=filename  password=password entries=max_users

—- >>> STARTUP & SHUTDOWN PROCESSES

STARTUP Process:

There are various stages the database goes through before it is opened for use.

Nomount State:

This is the state when the control file, online redo log files, and the database files are closed and are not accessible. The Oracle instance is available. Some of the V$ views (dynamic performance views), such as V$SESSION and V$INSTANCE, are available during this state.

A database may be brought to this state to perform operations like:

a. Creating a database.

b. Recreating the control file.
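A minimal sketch of bringing an instance to the nomount state from SQL*Plus and confirming it:

SQL> STARTUP NOMOUNT
SQL> SELECT STATUS FROM V$INSTANCE;   -- returns STARTED in this state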

Mount State:

This is the next phase through which the database passes. During this stage, the control file is opened and the existence of all the datafiles and online redo log files is verified.

A database may be brought to this state to perform operations like

a) Backup

b) Recovery of the system or undo datafile

c) Change the database to archive log mode etc.

When you query V$DATABASE for the OPEN_MODE, you get the answer MOUNTED.
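A minimal sketch of mounting the database from SQL*Plus and confirming the state:

SQL> STARTUP MOUNT
SQL> SELECT OPEN_MODE FROM V$DATABASE;   -- returns MOUNTED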

Open State:

The database is opened. During this stage, the datafiles and the online redo log files are opened and are ready to use. Oracle doesn’t allow you to open the database if any of the datafiles or online redo log files are missing or corrupted.

A database may be opened in READ ONLY mode as well as in READ WRITE mode. The status may be found by querying the V$DATABASE dynamic performance view. The query for this is shown below:

SQL>Select open_mode from v$database;

In READ ONLY mode, the database may be queried but one cannot perform DML operations.

Note:

Please note that to startup or shutdown the database, you need to logon as “SYS AS SYSDBA” or with any user account having SYSDBA role assigned.

There are other options available like one can start the database in a restricted mode. This is used during maintenance activities, upgrades etc. Only those users having RESTRICTED SESSION privilege can logon to the database.

Force – this option is used to start up the database when the instance has aborted (instance crash).
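Minimal sketches of these two options; restricted access can later be lifted without restarting the instance:

SQL> STARTUP RESTRICT
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;   -- re-open the database to normal users
SQL> STARTUP FORCE   -- aborts the instance if it is running, then restarts it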

SHUTDOWN Process:

Oracle has four shutdown modes: NORMAL, TRANSACTIONAL, IMMEDIATE, and ABORT.

SHUTDOWN NORMAL:

This is the default mode of shutting down the database. During this state, Oracle Server waits for all the users to disconnect.

SHUTDOWN TRANSACTIONAL:

Waits until all active transactions are completed and then shuts down the database. During this state, no new connections are permitted.

SHUTDOWN IMMEDIATE:

This option disconnects all sessions, rolls back all running transactions, and shuts down the database. During the next startup, no instance recovery is needed.

Note:

No instance recovery needs to be initiated by the SMON process during startup for databases that have been shut down in NORMAL, IMMEDIATE, or TRANSACTIONAL mode.

SHUTDOWN ABORT:

This option doesn’t roll back any transactions and simply brings down the database. In layman’s terms, it is just like pulling the power plug on a television. Any subsequent database startup needs an instance recovery to be initiated by SMON.

Any backup taken after shutting down the database in ABORT mode will not be consistent. It is recommended to use the first three methods to shut down the database for a consistent backup.
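For reference, the four shutdown commands as issued from SQL*Plus while connected AS SYSDBA:

SQL> SHUTDOWN NORMAL
SQL> SHUTDOWN TRANSACTIONAL
SQL> SHUTDOWN IMMEDIATE
SQL> SHUTDOWN ABORT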

Shajib Mahmud
