Wednesday, January 28, 2026

How to recover table from flashback after truncate table in Oracle

Data removed by a TRUNCATE TABLE statement can be restored by flashing back to a prior point in time, specified either as an SCN number or as a timestamp (via the TO_TIMESTAMP function).

Check if Flashback Database is enabled

SELECT flashback_on FROM v$database;

If the result is YES, Flashback Database is enabled and you can use it.

Note: By default, triggers are disabled during flashback recovery.

The following are the steps to flash back a table to an SCN or timestamp.

a. Enable the row movement for the table.

ALTER TABLE TEST ENABLE ROW MOVEMENT;

b. Flashback the table to SCN or timestamp.

FLASHBACK TABLE TEST TO SCN <scn_number>;
OR
FLASHBACK TABLE TEST TO TIMESTAMP TO_TIMESTAMP('2026-01-26 19:00:00','YYYY-MM-DD HH24:MI:SS');

Post-Steps:

Verify the data:

SELECT COUNT(*) FROM TEST;

Disable the row movement:

ALTER TABLE TEST DISABLE ROW MOVEMENT;

Recompile the dependent objects if they are invalid:

SELECT object_name, status 
FROM user_objects 
WHERE object_type IN ('VIEW','TRIGGER') AND status <> 'VALID';

-- Compile command for objects:
ALTER VIEW my_view COMPILE;
ALTER TRIGGER my_trigger COMPILE;
ALTER PROCEDURE my_procedure COMPILE;
ALTER FUNCTION my_function COMPILE;

Friday, January 16, 2026

Autonomous Health Framework (AHF)

 To get started with Autonomous Health Framework Insights run:

#ahf analysis create -type insights -last 3h
#tfactl summary

#tfactl analyze -last 1d

To find all occurrences of a specific error (in this case, ORA-00600):

#tfactl analyze -search "ora-00600" -last 8h

Enable or disable Automatic collections (ON by default)

tfactl set autodiagcollect=OFF

Adjusting the Diagnostic Data Collection Period

Purge logs: configure automatic log purging

tfactl set manageLogsAutoPurge=ON

ahfctl compliance

tfactl run managelogs -show usage

tfactl purge -older 30d # To remove file(s) older than 30 days.

tfactl purge -older 10h # To remove file(s) older than 10 hours.
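The `-older` age specifications above (`30d`, `10h`) can be sketched in Python. This is an illustrative model of how such a spec maps to a cutoff for selecting old files, not tfactl's actual implementation; the function names are hypothetical.

```python
import re

# Seconds per supported unit; "d" (days) and "h" (hours) match the
# tfactl purge examples above. This is a sketch, not tfactl code.
UNIT_SECONDS = {"d": 86400, "h": 3600}

def age_to_seconds(spec: str) -> int:
    """Parse an age spec like '30d' or '10h' into seconds."""
    m = re.fullmatch(r"(\d+)([dh])", spec)
    if not m:
        raise ValueError(f"unsupported age spec: {spec!r}")
    return int(m.group(1)) * UNIT_SECONDS[m.group(2)]

def files_to_purge(file_ages: dict, spec: str) -> list:
    """Return names of files whose age (seconds) exceeds the cutoff."""
    cutoff = age_to_seconds(spec)
    return sorted(name for name, age in file_ages.items() if age > cutoff)
```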

Tuning PGA_AGGREGATE_LIMIT

WITH
MAX_PGA as
(select round(value/1024/1024,1) max_pga from v$pgastat where name='maximum PGA allocated'),
MGA_CURR as
(select round(value/1024/1024,1) mga_curr from v$pgastat where name='MGA allocated (under PGA)'),
MAX_UTIL as
(select max_utilization as max_util from v$resource_limit where resource_name='processes')
SELECT
a.max_pga "Max PGA (MB)",
b.mga_curr "Current MGA (MB)",
c.max_util "Max # of processes",
round(((a.max_pga - b.mga_curr) + (c.max_util * 5)) * 1.1, 1) "New PGA_AGGREGATE_LIMIT (MB)"
FROM MAX_PGA a, MGA_CURR b, MAX_UTIL c
WHERE 1 = 1;
Reference:
In Oracle release 18c and above: Sizing the PGA in Oracle 19c – How to Account for the MGA Size (Doc ID 2808761.1)
Note: To reduce PGA usage and avoid hitting the PGA_AGGREGATE_LIMIT, you can reduce the value of the PROCESSES parameter based on your system requirement.
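The sizing formula in the query above can be expressed as a small Python helper, which makes it easy to sanity-check the recommended value from the raw numbers. The function name and inputs are illustrative; the arithmetic mirrors the SQL exactly.

```python
def new_pga_aggregate_limit(max_pga_mb: float, mga_curr_mb: float,
                            max_processes: int) -> float:
    """Sizing formula from the query above:
    ((max PGA allocated - current MGA) + 5 MB per process) * 1.1,
    i.e. a 10% safety margin on top of observed PGA plus per-process MGA."""
    return round(((max_pga_mb - mga_curr_mb) + (max_processes * 5)) * 1.1, 1)
```

For example, with a maximum PGA of 8192 MB, a current MGA of 512 MB, and a high-water mark of 300 processes, the suggested limit is (7680 + 1500) * 1.1 = 10098 MB.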

Compile invalid objects

Scripts to find and recompile invalid objects in an Oracle database.

SELECT owner, COUNT(*) FROM dba_objects WHERE status = 'INVALID' GROUP BY owner;

Compile with the utlrp.sql script:

@?/rdbms/admin/utlrp.sql

or

EXEC UTL_RECOMP.recomp_parallel(4, 'REPORT');

Alternatively, you can generate ALTER statements for specific object types:

select 'alter package '||owner||'.'||object_name||' compile;' from dba_objects where object_type='PACKAGE' and status='INVALID';

select 'alter package '||owner||'.'||object_name||' compile body;' from dba_objects where object_type='PACKAGE BODY' and status='INVALID';

select 'alter synonym '||owner||'.'||object_name||' compile;' from dba_objects where object_type='SYNONYM' and owner<>'PUBLIC' and status='INVALID';

select 'alter public synonym '||object_name||' compile;' from dba_objects where object_type='SYNONYM' and owner='PUBLIC' and status='INVALID';

select 'alter trigger '||owner||'.'||object_name||' compile;' from dba_objects where object_type='TRIGGER' and status='INVALID';

select 'alter view '||owner||'.'||object_name||' compile;' from dba_objects where object_type='VIEW' and status='INVALID';

select 'alter procedure '||owner||'.'||object_name||' compile;' from dba_objects where object_type='PROCEDURE' and status='INVALID';

select 'alter type '||owner||'.'||object_name||' compile body;' from dba_objects where object_type='TYPE BODY' and status='INVALID';
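The statement-generation pattern in these queries can be sketched in Python, for example when building the ALTER statements from rows fetched out of dba_objects with any Oracle driver. The function and dictionary names are hypothetical; the generated text mirrors the queries above.

```python
# Illustrative sketch (not a complete tool): map an object type to its
# ALTER ... COMPILE template, mirroring the dba_objects queries above.
COMPILE_TEMPLATE = {
    "PACKAGE": "alter package {o}.{n} compile;",
    "PACKAGE BODY": "alter package {o}.{n} compile body;",
    "TRIGGER": "alter trigger {o}.{n} compile;",
    "VIEW": "alter view {o}.{n} compile;",
    "PROCEDURE": "alter procedure {o}.{n} compile;",
    "TYPE BODY": "alter type {o}.{n} compile body;",
    "SYNONYM": "alter synonym {o}.{n} compile;",
}

def compile_statement(owner: str, name: str, object_type: str):
    """Return the ALTER ... COMPILE statement for one invalid object,
    or None for types better left to utlrp.sql."""
    if object_type == "SYNONYM" and owner == "PUBLIC":
        return f"alter public synonym {name} compile;"
    template = COMPILE_TEMPLATE.get(object_type)
    return template.format(o=owner, n=name) if template else None
```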

Tuesday, January 6, 2026

ORA-16665: timeout waiting for the result from a database

 The ORA-16665: timeout waiting for the result from a database error indicates that the Oracle Data Guard broker timed out while trying to communicate with a remote member database. This is typically a network issue or a problem with the remote instance. 

Causes
The primary causes for this error are:
  • The network call to the remote database did not complete in a timely manner (network latency or firewall issues).
  • The remote database was unable to execute the command due to an instance failure or performance issues. 
Troubleshooting and Resolution
To resolve the ORA-16665 error, follow these steps:
  1. Check Data Guard Broker Logs: Examine the Data Guard broker log files (drc*.log files in the trace directory) on both the primary and standby sites for detailed error messages that provide context on the specific failure.
  2. Verify Network Connectivity:
    • Ensure proper network communication (e.g., ping) is working properly between all members of the Data Guard configuration.
    • Use tnsping to check the reachability of the TNS service names used in the configuration from both the primary and standby servers.
    • Confirm that required ports (typically 1521, the default listener port) are open and not blocked by firewalls in both directions.
  3. Check Database Status: Ensure all database instances in the Data Guard configuration are running and healthy. A failed instance at the remote end can cause this timeout.
  4. Increase the Communication Timeout: If the network is simply experiencing high latency rather than a complete failure, you can increase the timeout values using the DGMGRL command-line interface.
    • The relevant properties are CommunicationTimeout (default is 180 seconds or 3 minutes) and OperationTimeout (default is 30 seconds).
    • Connect to DGMGRL and use the following command to increase the timeout (e.g., to 600 seconds or 10 minutes):
    DGMGRL> EDIT CONFIGURATION SET PROPERTY CommunicationTimeout = 600;
    DGMGRL> EDIT CONFIGURATION SET PROPERTY OperationTimeout = 600;
    
  5. Reissue the Command: After checking the logs, fixing network issues, or increasing the timeout, reissue the Data Guard broker command that initially failed. 
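The connectivity checks in step 2 can be complemented by a basic TCP probe of the listener port from a script. This Python sketch only proves that the port accepts connections (like a firewall check); it does not replace tnsping, which also validates the TNS service name. The function name is illustrative.

```python
import socket

def listener_reachable(host: str, port: int = 1521, timeout: float = 5.0) -> bool:
    """Basic TCP-level check that a listener port is open and not firewalled.
    True only means the port accepts connections, not that the database
    service is registered with the listener."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from both the primary and standby hosts against the other side's listener address to confirm the path is open in both directions.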

ORA-16778: redo transport error

ORA-16778: redo transport error means the Data Guard broker cannot ship redo to one or more standbys. It is usually caused by network problems, incorrect TNS/listener settings, missing executables, or parameter conflicts. Check the alert log, the broker log, and v$archive_dest_status, and confirm the TNS entries on both primary and standby; the issue is often resolved by fixing TNS connectivity or restarting the affected processes.

Common Causes & Solutions:
  1. Network Connectivity/TNS Issues:
    • Check: tnsping from primary to standby and vice-versa; check tnsnames.ora and listener.ora for correct hostnames/IPs/ports.
    • Fix: Ensure hostnames resolve, TNS entries match, and listeners are running on both sides.
  2. Data Guard Broker Configuration Errors:
    • Check: Use DGMGRL > SHOW CONFIGURATION and SHOW DATABASE for detailed status, check broker log (location varies by Oracle version).
    • Fix: Sometimes a simple restart or re-adding the standby via DGMGRL (after cleaning up duplicates) can resolve broker state issues.
  3. Missing or Incorrect Executables/Parameters:
    • Check: Verify that the necessary Oracle executables exist with correct permissions on the standby, and that the redo transport processes (LGWR/ARCn) are running.
    • Fix: Ensure LOG_ARCHIVE_DEST_n parameters are correctly set on the primary to point to the standby, and FAL_SERVER/FAL_CLIENT are set on both.
  4. Primary/Standby State Mismatch:
    • Check: In DGMGRL, see if the standby is disconnected (ORA-16857).
    • Fix: Try ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL; and then restart recovery, or even SHUTDOWN and STARTUP the standby. 
Diagnostic Steps:
  1. Check Alert Logs: Look for related errors like ORA-12541 (no listener) or network errors in both primary and standby alert_<sid>.log files.
  2. Query V$ARCHIVE_DEST_STATUS: On the primary, check this view for specific errors on the standby destination.
  3. Use DGMGRL: SHOW CONFIGURATION, SHOW DATABASE, and SHOW LOGFILE for status.
  4. Verify TNS: Ensure connectivity is solid with tnsping. 

GoldenGate – Extract and Replicat Checkpoint

 In Oracle GoldenGate, checkpoints in the Extract and Replicat processes ensure data consistency and fault tolerance. They record read/write positions, allowing processes to resume after failures without data loss. Checkpoints can be stored in checkpoint tables or files, helping optimize performance and monitor replication progress effectively.

Understanding Checkpoints in Oracle GoldenGate Extract and Replicat

In Oracle GoldenGate, a checkpoint in the Extract process is a mechanism used to record the current read and write positions in the data stream. This ensures data consistency, fault tolerance, and recovery in case of failures.

 What is a Checkpoint in Extract?

A checkpoint is a record of the current position in the source database's transaction log (such as the redo log in Oracle or the transaction log in SQL Server) that the Extract process has read and processed.

🧩 Purpose of Checkpoints

  1. Fault Tolerance: If the Extract process stops or crashes, it can resume from the last checkpoint instead of starting over.
  2. Data Consistency: Ensures that no transactions are missed or duplicated.
  3. Performance Optimization: Helps manage memory and disk usage by purging old data that has already been processed.

📌 Types of Checkpoints in Extract

  1. Read Checkpoint:
    • Marks the position in the transaction log where Extract last read.
    • Ensures Extract knows where to resume reading after a restart.
  2. Write Checkpoint:
    • Marks the position in the trail file where Extract last wrote data.
    • Ensures that data is not written twice or skipped.

 Where Are Checkpoints Stored?

  • Checkpoint Table (recommended): A table in the database that stores checkpoint information.
  • Checkpoint Files: Local files on disk used when a checkpoint table is not configured.

🔄 How It Works (Simplified Flow)

  1. Extract reads from the source database log.
  2. It processes the data and writes it to a trail file.
  3. After writing, it updates the checkpoint to reflect the new position.
  4. If Extract is restarted, it resumes from the last checkpoint.
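The flow above can be modeled with a toy Python class. This is not GoldenGate code; it only illustrates why advancing the read checkpoint after each write makes a restart resume exactly where processing stopped, with no duplicates and no gaps.

```python
# Toy model of the Extract checkpoint flow above (not GoldenGate code).
class Extract:
    def __init__(self):
        self.read_checkpoint = 0   # position in the source log last processed
        self.trail = []            # records written to the trail file

    def run(self, source_log):
        """Process the source log, resuming from the last checkpoint."""
        for pos in range(self.read_checkpoint, len(source_log)):
            self.trail.append(source_log[pos])   # write record to the trail
            self.read_checkpoint = pos + 1       # then advance the checkpoint
```

For example, if the process stops after three transactions and two more arrive, a second `run` call picks up at position 3 and appends only the new records.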

Check the checkpoints with the following commands:

INFO ALL   -- lists all processes and their status
OR 

INFO <extract_name> showch

OR 

INFO EXTRACT <extract_name>, SHOWCH

These commands display:

  • Current read and write checkpoint positions
  • Trail file details
  • Recovery checkpoint
  • Oldest unprocessed transaction
  • Checkpoint table (if used)

What is a Replicat Checkpoint?

A Replicat checkpoint records the position in the trail file from which the Replicat process last successfully applied a transaction to the target database.

🧩 Purpose of Replicat Checkpoints

  1. Recovery: If Replicat stops or crashes, it can resume from the last checkpoint without reapplying already committed transactions.
  2. Data Integrity: Prevents duplication or loss of data during replication.
  3. Monitoring: Helps administrators track replication lag and performance.

Types of Checkpoints in Replicat

  1. Read Checkpoint:
    • The position in the trail file that Replicat has read up to.
  2. Write Checkpoint:
    • The position in the target database where Replicat has successfully applied changes.

Where Are Replicat Checkpoints Stored?

  • Checkpoint Table (recommended): A table in the target database.
  • Checkpoint Files: Local files on disk, used when a checkpoint table is not configured; they reside in the dirchk subdirectory of the Oracle GoldenGate home.

Use this command in GGSCI:

INFO REPLICAT <replicat_name>, SHOWCH

Note: Replicat keeps track of the last CSN/SCN from the source that has been applied to the target using a checkpoint.
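The CSN tracking described in the note can be sketched as a small Python function. This is a toy model, not GoldenGate code: Replicat skips any trail record at or below the last applied CSN in its checkpoint, so a restart never reapplies a committed transaction.

```python
# Toy model of Replicat's CSN-based checkpoint (not GoldenGate code).
def apply_trail(records, last_applied_csn):
    """records: list of (csn, operation) read from the trail.
    Returns (operations applied, new checkpoint CSN)."""
    applied = []
    for csn, op in records:
        if csn <= last_applied_csn:
            continue                 # already applied before the restart
        applied.append(op)           # apply the operation to the target
        last_applied_csn = csn       # advance the checkpoint
    return applied, last_applied_csn
```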

Check the checkpoint table on the target database:

SELECT group_name, group_key, last_update_ts, log_csn, log_cmplt_csn FROM gguser.chkpt;
SELECT GROUP_NAME,GROUP_KEY,LAST_UPDATE_TS,LOG_CSN,LOG_CMPLT_CSN from gguser.chkpt;