Posts Tagged ‘Oracle Database’

AutoConfig Failing With NegativeArraySizeException 11g Database

October 6th, 2019, posted in Oracle Queries

I encountered this issue while running AutoConfig as a post-upgrade step on an 11g database.
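For context, this was the database-tier AutoConfig run; it is typically invoked with the adautocfg.sh script, whose exact path depends on your environment (the one below is only an example):

$ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>/adautocfg.sh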

Error stack while running AutoConfig:

 

Updating Context file…COMPLETED

Attempting upload of Context file and templates to database...ERROR: 

InDbCtxFile.uploadCtx() : Exception : Error executng BEGIN fnd_gsm_util.upload_context_file(:1,:2,:3,:4,:5); END;: 1; Oracle error -29532: ORA-29532: Java call terminated by uncaught Java exception: java.lang.NegativeArraySizeException has been detected in FND_GSM_UTIL.upload_context_file.
oracle.apps.ad.autoconfig.oam.InDbCtxFileException: Error executng BEGIN fnd_gsm_util.upload_context_file(:1,:2,:3,:4,:5); END;: 1; Oracle error -29532: ORA-29532: Java call terminated by uncaught Java exception: java.lang.NegativeArraySizeException has been detected in FND_GSM_UTIL.upload_context_file.
at oracle.apps.ad.autoconfig.oam.InDbCtxFile.uploadCtx(InDbCtxFile.java:281)
at oracle.apps.ad.autoconfig.oam.CtxSynchronizer.uploadToDb(CtxSynchronizer.java:328)
at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.updateDBInf(FileSysDBCtxMerge.java:735)
at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.updateDBFiles(FileSysDBCtxMerge.java:224)
at oracle.apps.ad.context.CtxValueMgt.processCtxFile(CtxValueMgt.java:1663)
at oracle.apps.ad.context.CtxValueMgt.main(CtxValueMgt.java:709)
FAILED


Solution:

Run the query below as the APPS user:

select is_compiled from user_java_methods where name = 'oracle/xml/parser/v2/SAXAttrList' and method_name='addAttr'
and arguments=7;

It should return "YES". If it returns YES, proceed further.
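For reference, the query should come back with a single row whose value is YES; the exact column formatting depends on your SQL*Plus settings, but it will look roughly like this:

IS_COMPILED
-----------
YES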

Execute the following SQL statement from the RDBMS Oracle home as the APPS user:

call dbms_java.set_native_compiler_option('optimizerThrowConversion', 'false');

Execute the following SQL statement from the RDBMS Oracle home as the SYS user:

call dbms_java.set_native_compiler_option('optimizerThrowConversion', 'false');

 

Flush all JIT-compiled code from the database by running the following statements as SYS:

truncate table java$mc$;
commit;

After this step I restarted the database and AutoConfig completed without any issues.
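For completeness, the restart was a plain bounce from SQL*Plus, after which AutoConfig was re-run with the same adautocfg.sh invocation shown earlier:

SQL> shutdown immediate;
SQL> startup;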


ORA-48913 Writing into trace file failed, file size limit 10485760 reached

October 1st, 2019, posted in Oracle EBS Application, Oracle Queries

ORA-48913: Writing into trace file failed, file size limit [10485760] reached

ERROR:-
Non critical error ORA-48913 caught while writing to trace file
"/apps/PROD/db/diag/rdbms/prod/PROD/trace/PROD_dbrm_6874.trc"
Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached.

In some environments, DBAs limit the size of trace files generated by the database. This applies to all trace files that can be generated under USER_DUMP_DEST/DIAGNOSTIC_DEST. The parameter that sets the limit is MAX_DUMP_FILE_SIZE, and a plain numeric value for it is interpreted as a number of OS blocks. Once it is set, any trace file that tries to grow beyond the specified size causes ORA-48913 to be recorded in the alert log file.

 

Cause:

The parameter MAX_DUMP_FILE_SIZE is set too low.

Solution:

We can increase the setting of MAX_DUMP_FILE_SIZE or set it to UNLIMITED.
MAX_DUMP_FILE_SIZE specifies the maximum size of trace files (excluding the alert log). Change this limit if you are concerned that trace files may use too much space.
A numerical value for MAX_DUMP_FILE_SIZE specifies the maximum size in operating system blocks.
A number followed by a K or M suffix specifies the file size in kilobytes or megabytes.
The special value UNLIMITED means that there is no upper limit on trace file size; dump files can then be as large as the operating system permits.
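Since the blocks form is the only one not demonstrated below, here is what it looks like (20480 blocks is just an example value):

SQL> alter system set max_dump_file_size=20480 scope=both;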


SQL> show parameter max_dump_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size string 20000


SQL> alter system set max_dump_file_size=UNLIMITED scope=both;

System altered.


SQL> show parameter max_dump_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size string UNLIMITED

 

OR

 

SQL> show parameter dump_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size string 20480

SQL> select max(lebsz) from x$kccle;

MAX(LEBSZ)
----------
512
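This is where the OS-block unit matters: with a 512-byte block size, the current setting of 20480 blocks works out to 20480 * 512 = 10,485,760 bytes, which is exactly the 10 MB limit reported in the ORA-48913 message above.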

SQL> alter session set max_dump_file_size='1024M';

Session altered.

SQL> show parameter max_dump_file_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
max_dump_file_size string 1024M

Creating a Database Link in Oracle Without Editing the tnsnames.ora File

June 19th, 2019, posted in Oracle Queries

A developer needed access to some objects from one schema to another over a database link. To enable the database link he tried to add an entry to the tnsnames.ora file, but ran into a permissions problem: as a developer he has limited privileges on the Unix machines, so he can't edit and save the tnsnames.ora file.

But there is a solution to this little problem:
you can create a fully functional database link without editing the tnsnames.ora file.
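One prerequisite worth checking first is that the schema creating the link has the CREATE DATABASE LINK privilege; the grant below uses SCOTT purely as a placeholder for the developer's schema:

SQL> grant create database link to scott;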

 

Little Demo Case:

system@TEST11> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

5 rows selected.


system@TEST11> select * from dba_db_links;

no rows selected

Create database link testlink_db2 using the full TNS entry:

system@TEST11> create database link testlink_db2
connect to system identified by oracle
using
'(DESCRIPTION=
(ADDRESS=
(PROTOCOL=TCP)
(HOST=10.2.10.18)
(PORT=1525))
(CONNECT_DATA=
(SID=test10)))'
/

Database link created.

Now a little check and cleanup:

system@TEST11> select * from v$version@testlink_db2;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

5 rows selected.

-- cleanout
system@TEST11> drop database link testlink_db2;

Database link dropped.

 

From documentation:

http://download.oracle.com/docs/html/B13951_01/net.htm#i1153728

https://docs.oracle.com/cd/E18283_01/server.112/e17118/statements_5005.htm

server_name = (DESCRIPTION=
(ADDRESS=
(PROTOCOL=TCP)
(PORT=port_number)
(HOST=host_name)
)
(CONNECT_DATA=(SERVICE_NAME=service_name)
)
)

where:

server_name is the name of an Oracle server that matches an entry in the RDB directory. An entry in the RDB directory can be added using the ADDRDBDIRE command.

TCP is the TCP protocol used for TCP/IP connections.

port_number is the port number of the Oracle Net listener. This is usually port number 1521.

host_name is the name that defines the system where the target Oracle server resides. This name must be in the local host definition on the AS/400 or in a name server on your network. The host name can also be entered as an IP address, for example, 161.14.10.12.

service_name is the service name of the Oracle server.
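Plugging that documented template into the same CREATE DATABASE LINK syntax gives something like the following; the host, port and service name are placeholders, not values from the demo above:

create database link testlink_svc
connect to system identified by oracle
using
'(DESCRIPTION=
(ADDRESS=
(PROTOCOL=TCP)
(HOST=dbhost.example.com)
(PORT=1521))
(CONNECT_DATA=
(SERVICE_NAME=orcl)))'
/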


Oracle 11g – MEMORY_MAX_TARGET and MEMORY_TARGET

May 22nd, 2019, posted in Oracle Queries

Adjusting memory_max_target based on available memory. This example is on Linux x86-64.

If you can afford to set memory_max_target higher than memory_target, this gives you room to grow memory_target later without restarting the database.

SQL> show parameters memory_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
memory_target big integer 17920M

SQL> show parameters memory_max_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
memory_max_target big integer 17920M

System has 36GB physical RAM available.

SQL> !grep MemTotal /proc/meminfo
MemTotal: 36912956 kB

The server has 24GB of kernel shared memory (/dev/shm) set aside for use with Oracle:

SQL>!df -h /dev/shm/

Filesystem Size Used Avail Use% Mounted on
tmpfs 24G 11G 14G 44% /dev/shm
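Keep in mind that on Linux the MEMORY_TARGET/MEMORY_MAX_TARGET implementation is backed by /dev/shm, so if /dev/shm is smaller than the value you want, the instance will refuse to start with ORA-00845. On most distributions the tmpfs can be resized online; the command below is a sketch of that (run as root, the size is only an example):

# mount -o remount,size=26g /dev/shm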

Increase memory_max_target to 24GB.

SQL> ALTER SYSTEM SET memory_max_target = 24G SCOPE=SPFILE;
System altered.

Shutdown Oracle.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Startup Oracle.
SQL> startup
ORACLE instance started.

Total System Global Area 2.5655E+10 bytes
Fixed Size 2213776 bytes
Variable Size 2.0133E+10 bytes
Database Buffers 5368709120 bytes
Redo Buffers 151166976 bytes
Database mounted.
Database opened.

Update your pfile.

SQL> create pfile from spfile;
File created.

Verify the new settings. The max is now 24G and memory_target is still 17920M. We can now increase memory_target if the need arises without shutting the database down.

SQL> sho parameters memory_max_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
memory_max_target big integer 24G

SQL> sho parameters memory_target
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
memory_target big integer 17920M
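And that is the payoff: with memory_max_target at 24G, memory_target can now be raised on the fly, for example (20G is just an illustrative value):

SQL> alter system set memory_target=20G scope=both;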

Table That Stores Cached Queries

November 14th, 2018, posted in Oracle Queries

This error is caused by corruptions that have crept into a table that stores cached queries. The only thing to do in this situation is to delete the contents of that table and clear the cache by bouncing the Apache server.

There is a patch available that can reduce the chance of these corruptions happening in the future.

Please do the following:

CREATE TABLE fnd_lov_choice_values_bak AS
SELECT * FROM fnd_lov_choice_values;

DELETE fnd_lov_choice_values;

COMMIT;

Clear the cache

Then bounce the Apache server.
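In an R12 environment this Apache/OHS bounce is normally done with the adapcctl.sh control script; the usage below is the common pattern, though paths can differ between environments:

$ADMIN_SCRIPTS_HOME/adapcctl.sh stop
$ADMIN_SCRIPTS_HOME/adapcctl.sh start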

Then please download and apply Patch 9527712:R12.FWK.B.

Afterwards, please try your test again.
