Archive for the ‘Solaris’ Category

Fixing the ORA-27102: out of memory Error in Oracle on Solaris 10

April 8th, 2018, posted in Oracle Queries, Solaris

Symptom:

As part of a database tuning effort you increase the SGA/PGA sizes, and Oracle greets you with an ORA-27102: out of memory error message, even though the system has enough free memory to serve Oracle's needs.

SQL> startup
ORA-27102: out of memory
SVR4 Error: 22: Invalid argument

Diagnosis

$ oerr ORA 27102
27102, 00000, "out of memory"
// *Cause: Out of memory
// *Action: Consult the trace file for details

 

Not so helpful. Let’s look at the alert log for some clues.

 

% tail -2 alert.log
WARNING: EINVAL creating segment of size 0x000000028a006000
fix shm parameters in /etc/system or equivalent

 

Oracle is trying to create a roughly 10G shared memory segment (the exact size depends on the SGA/PGA settings), but the operating system (Solaris in this example) responded with an invalid argument (EINVAL) error. There is a little hint about setting shm parameters in /etc/system.
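
As a sanity check, the hex segment size from the warning can be converted to decimal; it works out to roughly 10G, matching the SGA reported at startup later in this post. A minimal check with printf and bc (assuming a printf that accepts C-style hex constants, as the ksh93/bash builtins and /usr/bin/printf do):

% printf "%d\n" 0x000000028a006000
10905214976
% echo "10905214976 / (1024 * 1024 * 1024)" | bc
10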

Prior to Solaris 10, the shmsys:shminfo_shmmax parameter had to be set in /etc/system to the maximum shared memory segment size that can be created. The default is 8M on Solaris 9 and earlier versions, whereas one quarter of physical memory is the default on Solaris 10 and later. On a Solaris 10 (or later) system, it can be verified as shown below:

 

% prtconf | grep Mem
Memory size: 32760 Megabytes
% id -p
uid=59008(oracle) gid=10001(dba) projid=3(default)
% prctl -n project.max-shm-memory -i project 3
project: 3: default
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged      7.84GB      -   deny                                 -
        system          16.0EB    max   deny                                 -

 

Now it is clear that the system is using the default value (roughly 8G, one quarter of the 32G of physical memory) in this scenario, whereas the application (Oracle) is trying to create a memory segment (10G) larger than that. Hence the failure.

So, the solution is to configure the system with a value large enough for the shared segment being created, so Oracle succeeds in starting up the database instance.

On Solaris 9 and prior releases, this is done by adding the following line to /etc/system, followed by a reboot for the system to pick up the new value.

set shminfo_shmmax = 0x000000028a006000

However, the shminfo_shmmax parameter was obsoleted with the release of Solaris 10, and Sun does not recommend setting it in /etc/system even though it still works as expected.

On Solaris 10 and later, this value can be changed dynamically on a per-project basis with the help of the resource control facilities. This is how we do it:

 

% prctl -n project.max-shm-memory -r -v 10G -i project 3
% prctl -n project.max-shm-memory -i project 3
project: 3: default
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged      10.0GB      -   deny                                 -
        system          16.0EB    max   deny                                 -

 

Note that changes made with the prctl command on a running system are temporary and will be lost when the system is rebooted. To make the changes permanent, create a project with the projadd command and associate it with the user account, as shown below:

 

% projadd -p 3  -c 'eBS benchmark' -U oracle -G dba  -K 'project.max-shm-memory=(privileged,10G,deny)' OASB
% usermod -K project=OASB oracle
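
Keep in mind that the project association applies only to processes started after the oracle user logs in again; existing sessions keep their old project. After a fresh login, id -p should report the new project, along the lines of this (illustrative) output:

% id -p
uid=59008(oracle) gid=10001(dba) projid=3(OASB)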

 

Finally, make sure the project was created, using the projects -l or cat /etc/project commands.

 

% projects -l
...
...
OASB
        projid : 3
        comment: "eBS benchmark"
        users  : oracle
        groups : dba
        attribs: project.max-shm-memory=(privileged,10737418240,deny)
% cat /etc/project
...
...
OASB:3:eBS benchmark:oracle:dba:project.max-shm-memory=(privileged,10737418240,deny)
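
(Side note: in /etc/project the limit is stored in bytes; 10737418240 is exactly the 10G requested, as a quick bc check confirms.)

% echo "10 * 1024 * 1024 * 1024" | bc
10737418240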

 

With these changes in place, Oracle starts the database up normally.

 

SQL> startup
ORACLE instance started.
Total System Global Area 1.0905E+10 bytes
Fixed Size                  1316080 bytes
Variable Size            4429966096 bytes
Database Buffers         6442450944 bytes
Redo Buffers               31457280 bytes
Database mounted.
Database opened.

 


 

Addendum: Oracle RAC settings

Anonymous Bob suggested the following settings in a comment, for the benefit of others who run into similar issues when running Oracle RAC. I’m pasting the comment as is (disclaimer: I have not verified these settings):

Thanks for a great explanation, I would like to add one comment that will help those with an Oracle RAC installation. Modifying the default project covers oracle processes great and is all that is needed for a single instance DB. In RAC however, the CRS process starts the DB and it is a root owned process and root does not use the default project. To fix ORA-27102 issue for RAC I added the following lines to an init script that runs before the init.crs script fires.

 

# Recommended Oracle RAC system params
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
# For root processes like crsd
prctl -n project.max-shm-memory -r -v 8G -i project system
prctl -n project.max-shm-ids -r -v 512 -i project system
# For oracle processes like sqlplus
prctl -n project.max-shm-memory -r -v 8G -i project default
prctl -n project.max-shm-ids -r -v 512 -i project default

So simple yet it took me a week working with Oracle and SUN to come up with that answer…Hope that helps someone out.


How to manually mount a CD/DVD-ROM in Solaris

March 3rd, 2018, posted in Solaris, Uncategorized


Recently, while trying to build a Solaris JumpStart server, I encountered an error reading a home-burnt DVD-ROM of Solaris 10 10/08.  There are many threads on the internet that describe the same issue.  To work around it, I needed to mount the DVD-ROM manually.  To do this, you first need to disable the volume management software in Solaris.  Since I am running Solaris 10, the command is simple:

 

# svcadm disable volfs
# svcs volfs
STATE          STIME    FMRI
disabled        7:29:01 svc:/system/filesystem/volfs:default
# ps -ef |grep vol
    root 10189   936   0 07:26:10 pts/1       0:00 grep vol

 

Once that is done, you should be able to mount the DVD-ROM using the following command:

 

# mount -F hsfs -o ro /dev/sr0 /cdrom
# cd /cdrom
# ls
Copyright                    License                      boot                         platform
JDS-THIRDPARTYLICENSEREADME  Solaris_10                   installer
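
When you are finished with the disc, change out of the mount point and unmount it before handing control back to volume management (a small housekeeping step; adjust the mount point if you used a different one):

# cd /
# umount /cdrom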

Once done, don’t forget to re-enable volume management.  That is done by:

 

# svcadm enable volfs
# svcs volfs
STATE          STIME    FMRI
online          8:33:21 svc:/system/filesystem/volfs:default

adcfgclone.pl dbTechStack – RC-50004 Error occurred in CloneContext Target System Domain Name

February 4th, 2018, posted in Oracle, Solaris

RC-50004: Error occurred in CloneContext – Target System Domain Name in adcfgclone.pl dbTechStack:

Error:

Target System Domain Name : worths.co.za
RC-50004: Error occurred in CloneContext: 
null
Check Clone Context logfile
/ebsdbprd_app/oracle/product/ebsdbprd/db/tech_st/11.2.0/appsutil/clone/bin/CloneContext_0418120037.log for details.

ERROR: Context creation not completed successfully.
For additional details review the file /tmp/adcfgclone_4849738.err if present.



 

Solution:
Ensure that your fully qualified hostname (including the domain name) is present in the /etc/hosts file.

Not like this:

3 $ grep ebsdbprdp7 /etc/hosts
19.36.148.157   ebsdbprdp7
w@ebsdbprdp7:/
4 $

It should look like this:

3 $ grep ebsdbprdp7 /etc/hosts
19.36.148.157   ebsdbprdp7.domain.co.za  ebsdbprdp7
w@ebsdbprdp7:/
4 $
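
Once /etc/hosts has been corrected, it is worth confirming that the name resolves with its domain before re-running adcfgclone.pl; a quick check using the hostname from this example:

$ getent hosts ebsdbprdp7    # should list the fully qualified name alongside the short name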

For Multiple Server Connections – Single Putty Window – Multiple Tabs

January 28th, 2018, posted in Oracle, Solaris, Windows

Is it possible to have a tabbed PuTTY, like Firefox or Internet Explorer? Yes, you can – check out the URL below:

http://www.thegeekstuff.com/2008/08/turbocharge-putty-with-12-powerful-add-ons-software-for-geeks-3/

Look for PuTTY Connection Manager.


How To Change The Java Heap Size In R12 To Avoid The java.lang.OutOfMemoryError: Java Heap Space Error?

December 24th, 2017, posted in Oracle, Solaris

Recently we encountered a situation wherein end users were not able to get the R12 login page; it just hangs, even though all the opmn processes were up and running – adopmnctl.sh status reports the status as alive for all the components, viz. HTTP, oafm, oacore and forms. The same environment was available and end users were able to access it without any issues 15 minutes earlier. This error happens only when multiple people log in to the same page and perform similar activities, such as the employee self-service form. So where, and what exactly, could the problem be?

This is how we approached and resolved the issue.

First, we checked the Apache error log. The following error was reported in it.

==============================================================
[warn] [client IP Address ] oc4j_socket_recvfull timed out
[error] [client IP Address ] mod_oc4j: There is no oc4j process (for destination: application://oacore) available to service request.
[error] [client  IP Address] mod_oc4j: request to OC4J [servername:OC4J AJP Port Range for Oacore]
failed: Connect failed
==============================================================

The above error message does give us a hint that the problem is with oacore, but as I said earlier, oacore is alive according to opmn.

Next, we verified the oacore log file.

($INST_TOP/logs/ora/10.1.3/opmn/oacore_default_group_1/oacorestd.err)

Error Message in the file

=======================================================
Exception in thread “OC4JMonitorThread” java.lang.OutOfMemoryError: Java heap space
=======================================================
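
To keep an eye on this log while reproducing the problem, you can simply tail it (path as given above):

$ tail -f $INST_TOP/logs/ora/10.1.3/opmn/oacore_default_group_1/oacorestd.err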

While checking the error in the file, in parallel we opened another window to watch the CPU and memory usage, and we could see one Java process taking more than 100% CPU. This process was spawned by the opmn -d process, and its process id did not match the oacore process id. (Hint: adopmnctl.sh status gives the status as well as the process id.) It looked like an end-client Java process.

At this stage we had 3 options.

1. Kill the Java process which is consuming high CPU.
2. Bounce the oacore component.
3. Adjust the Java (JVM) memory parameters.

Each of the above options has its own implications and advantages. To minimize the downtime we decided to kill the Java process, and the moment we killed it, all the browsers that were hanging instantly reported Internal Server Error. This proved to be a bad decision.

So we moved to the next option, bouncing the oacore service, which we were sure would resolve the issue (temporarily), and it did as expected. Basically, when you bounce the services all the existing connections and processes are released, which results in more available memory when you restart the services.
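
For reference, a bounce of just the oacore component looks roughly like the following (a sketch only; adopmnctl.sh passes the arguments through to opmnctl, so verify the exact syntax and component names in your own instance):

$ cd $ADMIN_SCRIPTS_HOME
$ adopmnctl.sh stopproc process-type=oacore
$ adopmnctl.sh startproc process-type=oacore
$ adopmnctl.sh status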

OK, now we had tackled the problem and provided a temporary fix, but we needed a long-term solution. That is option 3: adjusting the Java memory size.

Steps to change the heap size.

First, you need to identify the maximum heap size that you can set.
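
A few quick checks help decide how high you can safely go – whether the OS and JVM are 64-bit, and how much physical memory the box has (standard Solaris/JDK commands; the safe ceiling still depends on what else runs on the server):

$ isainfo -b                 # kernel bitness: 64 means large heaps are possible
$ java -version              # JVM vendor/version in use
$ java -d64 -version         # fails if no 64-bit JVM is installed
$ prtconf | grep Memory      # physical memory on the box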

Once you have identified the maximum heap size, change the configuration files to make the new value permanent.

Step 1: Edit opmn.xml file.

Location : $INST_TOP/ora/10.1.3/opmn/conf/

Open opmn.xml

Search for the string Xms, Xmx, or module-id="OC4J".

This search should lead you to the location below:

<process-type id="oacore" module-id="OC4J" status="enabled" working-dir="$ORACLE_HOME/j2ee/home">
  <category id="start-parameters">
    <data id="java-options" value="-server -verbose:gc -Xmx512M -Xms128M ...

The default values for the maximum (-Xmx) and minimum (-Xms) heap sizes are 512M and 128M respectively.

Again, here you have options: you can set both Xms and Xmx to the same value if you feel all your sessions require more memory, or set a lower value for Xms and the maximum value for Xmx. Don't forget to change the values under <category id="stop-parameters"> as well.

opmn.xml also contains the JVM configurations for the other components – oafm and forms.

Step 2: Edit oc4j.properties file.

Location : ($INST_TOP/ora/10.1.3/j2ee/oacore/config)

This step is optional since we have already made the change in opmn.xml, but there is no harm in making it here too. It comes in handy for troubleshooting specific Oracle components, viz. Configurator, iSupplier, or any other module that heavily consumes CPU/memory.

Search for string Xms or Xmx or wrapper.

Option 1: If you find any of the above parameters, change their values to match what you set in opmn.xml, or even higher, as long as you don't exceed the maximum heap size limit.

Option 2: If you DO NOT find any of the above parameters, then make an entry like this, under the heading “# Java Object Cache Configuration Parameters”

wrapper.bin.parameters=-Xms[Value]M -Xmx[Value]M -XX:NewSize=256M -XX:MaxNewSize=256M

Step 3: Edit Applications Context file

vi $CONTEXT_FILE

Location: $INST_TOP/appl/admin/SID_hostname.xml

Search for the string s_oacore_jvm_start_options.

Change the Xms and Xmx values. Repeat the same step for the parameter s_oacore_jvm_stop_options.
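
To double-check the current values before and after editing, a quick grep of the context file works (variable names as referenced in this step):

$ grep s_oacore_jvm_start_options $CONTEXT_FILE
$ grep s_oacore_jvm_stop_options $CONTEXT_FILE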

Changes made in Step 3 will take effect the next time you run AutoConfig, whereas the Step 1 and 2 changes will take effect the next time you bounce the opmn services; however, those values are not permanent in the sense that they will be wiped out the next time you run AutoConfig. You can preserve them by placing the changes between the Begin/End customizations markers.

You can also increase the garbage collection threads parameter (-XX:ParallelGCThreads) to a higher value if you are on JDK 1.5 or later and have more than 2 CPUs or more memory. For more information, refer to Metalink Note 362851 – Guidelines to setup the JVM in Apps Ebusiness Suite 11i and R12.
