Happy Birthday I am Not A Subtle Person

March 7th, 2024, posted in COMiCS, DAtEs iN a YeAR, Uncategorized

ORA-27037: Unable To Obtain File Status

February 4th, 2024, posted in Oracle Queries
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup plus archivelog command at 12/24/2023 09:23:19
RMAN-06059: expected archived log not found, lost of archived log compromises recoverability
ORA-19625: error identifying file /u03/archive2/1_4134_983485279.dbf
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

The ORA-19625 error means that a file specified as input to a copy or backup operation, or as the target of an incremental restore, could not be identified as an Oracle file. An OS-specific error accompanies it to help pinpoint the problem.
To resolve the error, specify a different file and retry the operation, or try running the following commands.

RMAN> crosscheck copy of archivelog all
RMAN> crosscheck archivelog all
RMAN> resync catalog
RMAN> delete force obsolete;
RMAN> delete expired archivelog all ;
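
These commands are run from an RMAN session connected to the target database and, since resync catalog is used, to the recovery catalog as well. A typical connection looks like the following (the catalog user and TNS alias here are only placeholders; RMAN will prompt for the catalog password):

$ rman target / catalog rcat_user@rcatdb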

If the problem is still not solved, the archive log files were most likely deleted at the OS level. In that case, run the following command.

RMAN> Change archivelog '' UNCATALOG ;

Please note that the archive log name to use is the one reported in the ORA-19625 error message. For example:

RMAN-06059: expected archived log not found, lost of archived log
ORA-19625: error identifying file /var/opt/arch_oltp28_1_744738.arc
ORA-27037: unable to obtain file status
RMAN> Change archivelog '/var/opt/arch_oltp28_1_744738.arc' uncatalog;

Run the archive log backup command (a sample command is shown at the end of this post) and check whether you still get the error.
Keep uncataloging the archive log file names reported in ORA-19625 until the archive log backup completes successfully.
Or

RMAN> Change Archivelog all Uncatalog ;

Please note that the above command will uncatalog the information about all archived logs from the catalog database.
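
In either case, re-run the archive log backup afterwards and confirm that the error is gone. The failing command in the error stack above was a backup plus archivelog; depending on how your backups are scripted (the commands below are generic examples, not the exact command from the failing job), it could be something like:

RMAN> backup database plus archivelog;

or, to back up only the archived logs:

RMAN> backup archivelog all;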


Error: Your “crontab” on unexpected end of line. This entry has been ignored

January 20th, 2024, posted in Linux OS, Uncategorized

Every time you edit your crontab file, the error “Your “crontab” on <server> unexpected end of line. This entry has been ignored” is sent to the user's email. This happens if there is a blank line in your crontab file.


For instance, in the following crontab file there is a blank line between the last two cron jobs.

root@sunsolaris# crontab -l
# The root crontab should be used to perform accounting data collection.
#
# The rtc command is run to adjust the real time clock if and when
# daylight savings time changes.
#
10 1 * * 0,4 /etc/cron.d/logchecker
10 2 * * 0  /usr/lib/newsyslog
15 3 * * 0 /usr/lib/fs/nfs/nfsfind

30 4 * * * /usr/local/bin/disk_check.sh

To resolve the problem, edit the crontab file, find the blank line, and delete it. After editing, the crontab above should look like the following:

root@sunsolaris# crontab -l
# The root crontab should be used to perform accounting data collection.
#
# The rtc command is run to adjust the real time clock if and when
# daylight savings time changes.
#
10 1 * * 0,4 /etc/cron.d/logchecker
10 2 * * 0  /usr/lib/newsyslog
15 3 * * 0 /usr/lib/fs/nfs/nfsfind
30 4 * * * /usr/local/bin/disk_check.sh

You can see that the blank line has been removed from the crontab file.
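
If the blank line is difficult to spot by eye, standard tools can find and strip it non-interactively; the temporary file name below is only an example:

root@sunsolaris# crontab -l | grep -n '^$'
root@sunsolaris# crontab -l | sed '/^$/d' > /tmp/root.cron
root@sunsolaris# crontab /tmp/root.cron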


Memory missing in Solaris 10 with ZFS

January 7th, 2024, posted in Solaris

Ever wondered where all the precious memory installed on your server has gone? Among many other possible reasons, if you are running Solaris 10 with the ZFS file system, this may be your answer.

The ZFS Adaptive Replacement Cache (ARC) tends to use up to 75% of the installed physical memory on servers with 4GB or less, and everything except 1GB of memory on servers with more than 4GB, to cache data in a bid to improve performance. For example, on a server with 32GB of RAM the ARC can grow to roughly 31GB.

This can significantly affect performance on mission-critical servers running databases and other memory-hungry services.

To identify how much memory ZFS is using:

# kstat -m zfs | grep size

        data_size                       18935877120
        hdr_size                        66041496
        l2_hdr_size                     0
        l2_size                         0
        other_size                      11310112
        size                            19013228728

Here “19013228728” (approximately 18GB) is the total memory used by the ZFS ARC.

Alternatively, the following mdb command shows the ZFS ARC usage:

# echo "::arc" | mdb -k|grep size
size                      =      2048 MB
hdr_size                  =  12493584
data_size                 = 2048608256
other_size                =  86475456
l2_size                   =         0
l2_hdr_size               =         0

It makes sense to cap the maximum memory the ZFS ARC can use on servers where other services have higher memory requirements.

To set the maximum limit for the ZFS ARC, edit the /etc/system file and add the following line:

set zfs:zfs_arc_max=2147483648

where 2147483648 restricts usage to a maximum of 2GB of physical memory. Unfortunately, this setting requires a reboot to take effect and cannot be changed dynamically.
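
As a sanity check, 2GB is 2 x 1024 x 1024 x 1024 = 2147483648 bytes. After the reboot, the effective cap can be read back from the ARC kstats (assuming the usual arcstats kstat name on Solaris 10):

# kstat -p zfs:0:arcstats:c_max

which should report 2147483648 once the new limit is active.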


Happy New Year By Batman And Joker

December 31st, 2023, posted in COMiCS, DC Comic And Movies

Happy New Year By Batman And Joker
