DBA Blogs
Tom,
I create databases and then run catalog.sql and catproc.sql. Sometimes I do not run pupbld.sql. Users may get a warning message, but they can log in and work.
But my friend says that if pupbld.sql is not run as SYSTEM, users will get error messages and will not be able to log into the database at all. Is that true?
Is it a must to run pupbld.sql? I could not see in the documentation whether it is required. If it is a must, how am I able to log in?
Is it called by any other script, such as catalog.sql or catproc.sql? I grepped both files for pupbld.sql; it does not appear.
Please clarify.
Regards
Ravi
According to MOS (Metalink) note 785347.1 it seems possible to have a Data Guard configuration with an 11.2 primary and a 12.2 or even later standby, but the note is really very condensed.
Could you please just confirm that 11.2 -> 12.2 is really possible?
If so, what about 11.2 -> 19.x?
Or 12.2 -> 19.x?
Of course the idea is to upgrade to a later version with very short downtime; after switching over to the newer version, the old one would be discarded and the Data Guard configuration no longer used.
Best regards
Mauro
How do I call a REST API that accepts x-www-form-urlencoded from a PL/SQL procedure in APEX?
I am calling
https://api.textlocal.in/docs/sendsms
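For reference, one common way to POST an x-www-form-urlencoded body from PL/SQL in APEX is APEX_WEB_SERVICE. The sketch below is an assumption-laden illustration: the endpoint path, API key, and parameter names (apikey, numbers, message) are placeholders taken loosely from the textlocal docs, not verified values.
<code>DECLARE
  l_response CLOB;
BEGIN
  -- Tell the server the body is form-encoded
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';

  -- Build the body as key=value pairs, URL-escaping each value
  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://api.textlocal.in/send/',   -- placeholder endpoint
                  p_http_method => 'POST',
                  p_body        => 'apikey='    || utl_url.escape('YOUR_API_KEY', TRUE)
                                || '&numbers=' || utl_url.escape('911234567890', TRUE)
                                || '&message=' || utl_url.escape('Hello from APEX', TRUE));
  dbms_output.put_line(l_response);
END;</code>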
When I try to run this code:
<code>DECLARE
STUDENT_ID NUMBER;
BEGIN
-- Generate the next value for the sequence
SELECT LMS_STUDENT_DETAILS_SEQ.nextval;
-- Insert data into LMS_STUDENT_DETAILS table
INSERT INTO LMS_STUDENT_DETAILS (STUDENT_ID, STUDENT_NAME, GENDER, DATE_OF_BIRTH, COURSE, CONTACT_NUMBER, DEPARTMENT)
VALUES (STUDENT_ID, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);
-- Insert data into LMS_BORROWER table
INSERT INTO LMS_BORROWER (BORROWER_ID, ENTITY_OWNER_FK, ENTITY_TYPE)
VALUES (LMS_BORROWER_SEQ.nextval, STUDENT_ID, 'STUDENT');
END;</code>
I faced this error:
<code>ORA-06550: line 1, column 106: PLS-00103: Encountered the symbol "DECLARE" when expecting one of the following: ( - + case mod new not null <an identifier> <a double-quoted delimited-identifier> <a bind variable> continue avg count current exists max min prior sql stddev sum variance execute forall merge time timestamp interval date <a string literal with character set specification> <a number> <a single-quoted SQL string> pipe <an alternatively-quoted string literal with character set</code>
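For reference, a version of the block that compiles might look like the sketch below. The original `SELECT LMS_STUDENT_DETAILS_SEQ.nextval;` is not valid PL/SQL (a SELECT inside a block needs INTO), and the variable is renamed so it cannot be confused with the STUDENT_ID column:
<code>DECLARE
  l_student_id NUMBER;
BEGIN
  -- In 11g and later, NEXTVAL can be assigned directly;
  -- older releases need SELECT ... INTO ... FROM dual
  l_student_id := LMS_STUDENT_DETAILS_SEQ.NEXTVAL;

  INSERT INTO lms_student_details
    (student_id, student_name, gender, date_of_birth, course, contact_number, department)
  VALUES
    (l_student_id, :P6_STUDENT_NAME, :P6_GENDER, :P6_DOB, :P6_COURSE, :P6_CONTACT_NO, :P6_DEPARTMENT);

  INSERT INTO lms_borrower (borrower_id, entity_owner_fk, entity_type)
  VALUES (LMS_BORROWER_SEQ.NEXTVAL, l_student_id, 'STUDENT');
END;</code>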
I have a user defined database package which is used quite heavily. When I need to update the code body, I will get several
<code>ORA-04061: existing state of package body "CS.PACKAGE" has been invalidated
ORA-04065: not executed, altered or dropped package body "CS.PACKAGE"
ORA-06508: PL/SQL: could not find program unit being called: "CS.PACKAGE"
ORA-06512: at "CS.PROCEDURE", line 228</code>
We are using a connection pool. How do I put the changes into PACKAGE, without getting several of the above errors?
I cannot control the use of the package, and it is very heavily used.
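One common mitigation, sketched below with a hypothetical <code>demo_pkg</code>: sessions hit ORA-04061/ORA-04068 only when they hold instantiated state for the recompiled package (package-level variables or cursors), so moving state out of the package lets the body be recompiled under a connection pool without invalidating callers. (Edition-based redefinition is the fuller solution when globals cannot be removed.)
<code>-- Hypothetical package: the point is that neither spec nor body
-- declares package-level variables, so sessions carry no state
-- and replacing the body does not raise ORA-04061/ORA-04068.
CREATE OR REPLACE PACKAGE demo_pkg AS
  PROCEDURE do_work;
END demo_pkg;
/
CREATE OR REPLACE PACKAGE BODY demo_pkg AS
  PROCEDURE do_work IS
    l_counter NUMBER := 0;  -- state lives in locals (or tables), not globals
  BEGIN
    l_counter := l_counter + 1;
  END do_work;
END demo_pkg;
/</code>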
This video is a hands-on, step-by-step primer on how to quantize any model using Hugging Face Quanto, a versatile PyTorch quantization toolkit.
<code>!pip install transformers==4.35.0
!pip install quanto==0.0.11
!pip install torch==2.1.1
!pip install sentencepiece==0.2.0

import sentencepiece as spm
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "google/flan-t5-small"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Generate with the full-precision model
input_text = "Meaning of happiness is "
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))

# Measure the model size before quantization
from helper import compute_module_sizes
module_sizes = compute_module_sizes(model)
print(f"The model size is {module_sizes[''] * 1e-9} GB")

# Quantize the weights to int8 and freeze them
from quanto import quantize, freeze
import torch

quantize(model, weights=torch.int8, activations=None)
freeze(model)

# Measure the model size again after quantization
module_sizes = compute_module_sizes(model)
print(f"The model size is {module_sizes[''] * 1e-9} GB")

# Generate with the quantized model
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))</code>
This video is a hands-on, step-by-step primer on how to use RAG with OpenAI File Search. OpenAI now supports RAG, which means you can attach your own files and custom data to an OpenAI assistant and talk to your documents with GPT-4. Make sure you have installed the latest version of openai on your local system (pip install openai --upgrade), and make sure data.txt is in the same folder as your script.
<code>from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Personal Assistant",
    instructions="You are an empathetic assistant. Use your knowledge base to answer questions.",
    model="gpt-4-turbo",
    tools=[{"type": "file_search"}],
)

# Create a vector store called "Personal Data"
vector_store = client.beta.vector_stores.create(
    name="Personal Data",
    expires_after={"anchor": "last_active_at", "days": 1},
)

# Ready the files for upload to OpenAI
file_paths = ["data.txt"]
file_streams = [open(path, "rb") for path in file_paths]

# Use the upload-and-poll SDK helper to upload the files, add them to the
# vector store, and poll the status of the file batch for completion.
file_batch = client.beta.vector_stores.file_batches.upload_and_poll(
    vector_store_id=vector_store.id, files=file_streams
)

# Print the status and the file counts of the batch to see the result.
print(file_batch.status)
print(file_batch.file_counts)

assistant = client.beta.assistants.update(
    assistant_id=assistant.id,
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

thread = client.beta.threads.create(
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
    messages=[{"role": "user", "content": "Who is Fahd Mirza?"}],
)

# The thread now has a vector store with that file in its tool resources.
print(thread.tool_resources.file_search)

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)

messages = list(client.beta.threads.messages.list(thread_id=thread.id, run_id=run.id))
message_content = messages[0].content[0].text
annotations = message_content.annotations
citations = []
for index, annotation in enumerate(annotations):
    # Replace each inline citation marker with a numbered reference
    message_content.value = message_content.value.replace(annotation.text, f"[{index}]")
    if file_citation := getattr(annotation, "file_citation", None):
        cited_file = client.files.retrieve(file_citation.file_id)
        citations.append(f"[{index}] {cited_file.filename}")

print(message_content.value)
print("\n".join(citations))</code>
Function calling in AI simply means that you can call external APIs from within your AI-powered application. When you read that a model can do function calling, it means the model can take a user's natural-language query and convert it into a function call. You can then execute that call against your API endpoint to get the data, hand it to the LLM as additional context, and get a more grounded, up-to-date response, as your application requires.
<code>import openai

def Get_Beach_Patrol_Info(name, location):
    if 'Bondi' in name:
        print(f"The beach {name} at {location} is patrolled.")
    elif 'Marley' in name:
        print(f"The beach {name} of {location} is unpatrolled.")
    else:
        print("Beach not found.")

def get_gorilla_response(prompt="", model="gorilla-openfunctions-v2", functions=[]):
    openai.api_key = "EMPTY"  # Hosted for free by UC Berkeley
    openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
    try:
        completion = openai.ChatCompletion.create(
            model="gorilla-openfunctions-v2",
            temperature=0.0,
            messages=[{"role": "user", "content": prompt}],
            functions=functions,
        )
        return completion.choices[0]
    except Exception:
        print("error occurred.")

beach_custom_functions = [
    {
        'name': 'Get_Beach_Patrol_Info',
        'description': 'Get name and location of beach from the body of the input text',
        'parameters': {
            'type': 'object',
            'properties': {
                'name': {
                    'type': 'string',
                    'description': 'Name of the beach'
                },
                'location': {
                    'type': 'string',
                    'description': 'Location where beach is located.'
                }
            }
        }
    }
]

beach_1_description = """Bondi Beach is an iconic destination located in Sydney, Australia, renowned for its golden sands and lively atmosphere. It attracts surfers, sunbathers, and tourists alike, offering stunning ocean views and a vibrant coastal culture."""

beach_2_description = """Marley Beach (also known as Big Marley Beach) is a stunning beach in the upper Royal National Park, on the outskirts of southern Sydney, Australia. The beach is one of eleven beaches located within the territory of the Royal National Park."""

beach_description = [beach_1_description, beach_2_description]

# Ask the model to turn each description into a function call, then run it
for i in beach_description:
    response = get_gorilla_response(prompt=i, functions=beach_custom_functions)
    func = response.message.content
    print(func)
    eval(func)</code>
Hi Tom, please see the script below.
<code>create table t0326 (id number, num varchar2(100));
declare
v_empno number:=125854437665589038536841445202964995521300;
begin
dbms_output.put_line('v_empno -- ' || v_empno);
dbms_output.put_line('to_char(v_empno) -- '|| to_char(v_empno));
insert into t0326 values(10, to_char(v_empno));
commit;
end;
/
v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300
select * from t0326;
ID NUM
---------- ------------------------------------------------------------
10 1.2585443766558903853684144520296500E+41
declare
v_empno number:=125854437665589038536841445202964995521300;
v_s_empno varchar2(100);
begin
v_s_empno := to_char(v_empno);
dbms_output.put_line('v_empno -- ' || v_empno);
dbms_output.put_line('to_char(v_empno) -- '|| to_char(v_empno));
dbms_output.put_line('v_s_empno -- '|| v_s_empno);
insert into t0326 values(20, to_char(v_empno));
insert into t0326 values(30, v_s_empno);
insert into t0326 values(40, to_char(v_empno, 'FM999999999999999999999999999999999999999999999999999999999'));
commit;
end;
/
v_empno -- 125854437665589038536841445202964995521300
to_char(v_empno) -- 125854437665589038536841445202964995521300
v_s_empno -- 125854437665589038536841445202964995521300
select * from t0326;
ID NUM
---------- -----------------------------------------------------------------------
10 1.2585443766558903853684144520296500E+41
20 1.2585443766558903853684144520296500E+41
30 125854437665589038536841445202964995521300
40 125854437665589038536841445202964995521300 </code>
The value displays normally when to_char(v_empno) is used in dbms_output.put_line, but the value inserted into the database is converted to scientific notation.
I tried two solutions to this problem, shown above:
1. Use a variable to store to_char(v_empno), then insert that variable into the database.
2. Use to_char(xx, fmt) to control the format.
I wonder why to_char(v_empno) in dbms_output.put_line is not in scientific notation, and why adding a temporary variable solves the problem.
Hi,
I used explain plan and got the following results. Based on cost and time, does query 2 perform significantly better than query 1? The runtime for query 1 is approximately 1 minute and 40 seconds, but it shows 07:47:02. Why is the estimated time so different from the actual? Your help is much appreciated!
Query 1:
<code>------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71730 | 2241K| 717M (1)| 07:47:02 |
|* 1 | TABLE ACCESS FULL| TBL1 | 71730 | 2241K| 717M (1)| 07:47:02 |
------------------------------------------------------------------------------</code>
Query 2:
<code>------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 71730 | 2241K| 51028 (1)| 00:00:02 |
|* 1 | TABLE ACCESS FULL| TBL1 | 71730 | 2241K| 51028 (1)| 00:00:02 |
------------------------------------------------------------------------------</code>
We recently had a database failure that resulted in data loss after an Oracle 19.3.0.0.0 database had both its control files and redo log files deleted. Please note that I am not a DBA, but simply an analyst who supports the system that sits on this Oracle database. Any amount of data loss is fairly serious, and I am wondering how we avoid this in the future.
Before the control and redo files were deleted, we had an event wherein the drive this database sits on filled up. This caused the database to stop writing transactions and prevented users from accessing the application. Once space was made on the drive, the database operated normally for several hours until the redo and control files were deleted.
What would have caused the control and redo files to be deleted?
In trying to figure out what happened, it was noted that if we had expanded the drive's capacity in response to its becoming full, the later data loss would not have happened. Does Tom agree with that sentiment? Are the two events linked (disk drive nearly full, then later data loss), or are they symptoms of two different things?
Is 'SELECT * FROM :TABLE_NAME;' possible? That is, can a bind variable supply a table name?
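Bind variables can only supply values, not identifiers such as table names, so a statement like this needs dynamic SQL. A minimal sketch, using a hypothetical table name and DBMS_ASSERT to guard against SQL injection:
<code>DECLARE
  l_table VARCHAR2(128) := 'EMP';   -- hypothetical table name
  l_cur   SYS_REFCURSOR;
BEGIN
  -- Identifiers cannot be bound, so the name is concatenated;
  -- DBMS_ASSERT.SQL_OBJECT_NAME rejects names that are not real objects.
  OPEN l_cur FOR
    'SELECT * FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(l_table);
  -- ... fetch from l_cur as usual ...
  CLOSE l_cur;
END;</code>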
Hi all,
I have 1000 tables. Some of them have had rows deleted and updated, so fragmentation has built up.
How do I determine which tables are fragmented?
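One common (approximate) approach is to compare each table's allocated space with the size its row statistics suggest; a large gap hints at fragmented free space below the high-water mark. A sketch, assuming optimizer statistics are current, with an arbitrary 2x threshold:
<code>-- Tables whose allocated space is well above what the rows need
SELECT t.table_name,
       ROUND(s.bytes / 1024 / 1024)                    AS allocated_mb,
       ROUND(t.num_rows * t.avg_row_len / 1024 / 1024) AS estimated_mb
FROM   user_tables   t
       JOIN user_segments s ON s.segment_name = t.table_name
WHERE  t.num_rows IS NOT NULL
AND    s.bytes > 2 * t.num_rows * t.avg_row_len   -- threshold is illustrative
ORDER  BY s.bytes DESC;</code>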
I am migrating an 11g database cross-endianness from on-prem to ExaCS. The on-prem database db_block_size is 4k and all the tablespaces are also of 4k block size. <u>Since I cannot provision a non-standard block size database in OCI</u>, I am worried about the performance impact caused by the different block size. Please help me understand what database block size would be recommended for the scenario below.
<code>
-----------------------------------------------------------
Source : ON_PREM
-----------------------------------------------------------
Platform / ID : AIX-Based Systems (64-bit) / 6
Version : 11.2.0.4.0
Size (GB) : 17 TB
db_block_size : 4k
All Tablespaces BLK Size : 4k
-----------------------------------------------------------
Target : OCI - EXACS
-----------------------------------------------------------
Platform / ID : LINUX / 13
Version : 11.2.0.4.0
Size (GB) : 17 TB
db_block_size : 8K
APP Tablespaces BLK Size : 4k
SYSTEM/SYSAUX/TEMP/UNDO : 8K
</code>
Phase 1: Migrating from AIX 11g to EXACS 11g
Phase 2: 19c upgrade and multitenant conversion {<i>Due to business requirements we have to split the migration and the upgrade</i>}
<b>Question : </b>
1. Can we guarantee that there will be no performance impact from the difference in tablespace and database block size if the db_4k_cache_size parameter is set to an adequately large value?
2. Or is it better to go with the same 4k block size as the source on-premises database?
Of course application regression testing and RAT will be included, but testing both cases is not feasible, hence we are reaching out for expert advice.
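For reference, a 4k tablespace can only be opened in an 8k database when a matching buffer cache is configured; the size below is purely illustrative, not a recommendation:
<code>-- db_4k_cache_size must be non-zero before any 4k-block tablespace
-- can be created in, or transported into, the 8k database.
ALTER SYSTEM SET db_4k_cache_size = 2G SCOPE=BOTH SID='*';</code>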
Hello Sir,
I was able to get one scenario to work. I had a VM (server) running Oracle 19c with just 1 table of 5 records, and I took a backup of the whole VM (disk backup). Then I added a new table to my database with 3 records (ensuring the database is in ARCHIVELOG mode) and ran:
<code>rman target /
backup database plus archivelog;</code>
Next I added 2 more records and noted the system time, let's say 2024-04-06 15:33:55 (so I can restore up to this time) - so basically a new table with 5 records. Once all this was done I ran the command below:
<code>backup incremental level 1 database plus archivelog;</code>
Then I deleted my VM and restored the first old copy of my VM backup (the one with 1 table and 5 records). After the VM restore I followed the steps below, and point-in-time recovery worked up to 2024-04-06 15:33:55 (so I then had 2 tables with 5 records each). The main step I had missed earlier was RESTORING the control file, since I was doing the restore on a different (new) server:
<code>SQL> shutdown abort;
RMAN> startup nomount;
RMAN> RESTORE CONTROLFILE FROM "/mnt/orabkup1/snapcf_ev.f";
RMAN> startup mount;
RMAN> run
{
  SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
  sql 'ALTER DATABASE OPEN RESETLOGS';
}</code>
Everything was good here: with this approach I was able to get point-in-time recovery to work. I had been missing the restore of the control file.
Now for the scenario which I am still not able to get working; I am sure I am making a very basic mistake (maybe I don't understand the archive log and redo log properly).
The scenario I want to make work is: I have a VM backup (disk backup) containing 1 table with 5 records. Then I create a 2nd table and add, let's say, 2 records to it, and this time I only take an ARCHIVELOG backup; then I add 3 more records, back up the archive logs again, and note the time (let's assume 2024-04-06 15:33:55), with the following steps:
<code>
backup archivelog all;
insert into xxx VALUES(3,'Line 1');
insert into xxx VALUES(4,'Line 1');
commit;
backup incremental level 1 archivelog all;</code>
Here I have not done 'backup database plus archivelog' (assuming all those new inserts would be in the redo log and maybe in the archive log?). Now I delete this VM, restore a new VM from disk backup backup1 (where only 1 table with 5 records exists), and simply run the following:
<code>shutdown abort;
startup nomount;
RESTORE CONTROLFILE FROM "/mnt/orabkup1/snapcf_ev.f";
startup mount;
run
{
SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
RESTORE ARCHIVELOG ALL;
RECOVER DATABASE;
sql 'ALTER DATABASE OPEN RESETLOGS';
}
</code>
But unfortunately it complains:
<code>ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'D:\BASE\MYDB\DATA\SYSTEM01.DBF'</code>
Not sure why this, as I was thinking that Arc...
Hi Team
I have the following scenario.
Step#1) User clicks to particular App UI screen.
Step#2) User selects multiple filters on UI - say filter1, filter2 which correspond to table columns.
Step#3) For each filter selected by user, he needs to enter data - say Mark (for filter1), Will (for filter2) based on which search will be performed on the respective filters (aka table columns).
Step#4) User inputs from above Steps#2, 3 are passed to PLSQL API which returns desired SQL result in paginated manner (pageSize: 50).
User inputs from Steps #2 and #3 will be dynamic.
I have tried to implement this using native dynamic SQL, but it looks like I have hit a dead end. I am able to use dynamic values in the USING clause, but I am not able to return the data from the SELECT statement with EXECUTE IMMEDIATE.
Shared above LiveSQL link which has re-producible test case.
If I comment line "BULK COLLECT INTO l_arId, l_arName, l_arType" in the procedure, the block executes successfully.
But I need the result set from SELECT statement in procedure as output.
Looking for some advice here.
Thanks a bunch!
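For reference, EXECUTE IMMEDIATE can combine BULK COLLECT INTO and USING in one statement (INTO comes before USING). A minimal sketch with hypothetical table and column names, since the LiveSQL test case is not reproduced here:
<code>DECLARE
  TYPE t_id_tab   IS TABLE OF NUMBER;
  TYPE t_name_tab IS TABLE OF VARCHAR2(100);
  l_ids   t_id_tab;
  l_names t_name_tab;
  l_sql   VARCHAR2(4000);
BEGIN
  -- The column list in the filter is built dynamically;
  -- the filter values stay as bind variables.
  l_sql := 'SELECT id, name FROM students WHERE first_name = :1 AND last_name = :2';

  EXECUTE IMMEDIATE l_sql
    BULK COLLECT INTO l_ids, l_names
    USING 'Mark', 'Will';

  DBMS_OUTPUT.put_line(l_ids.COUNT || ' rows fetched');
END;</code>
For pagination, the same pattern works with OFFSET :o ROWS FETCH NEXT :n ROWS ONLY appended to the query and two extra bind values in the USING list.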
We have an application written to insert a CHAR(50) variable into a column defined as CHAR(40). In Oracle 11g (I know this is very old) it would merely truncate the last 10 characters without issue. However, Oracle 19c doesn't allow this and raises an exception (which I believe should always have been the case). Where can I find documentation of this restriction and when it changed, and is there a way around it other than changing the program code?
Oracle 11g truncated the extra 10 characters in the statement below:
ADBBGNX_ADDRESS_LINE_1 := agentrecord.producerrec.businessAddressLine1;
Oracle 19c throws an exception with a NULL error status.
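If the code can be changed at all, truncating explicitly restores the old behavior. A sketch reusing the assignment from the question (whether a one-line change counts as acceptable is, of course, your call):
<code>-- Keep only the first 40 characters explicitly, instead of relying on
-- silent truncation, which now raises a value-error exception.
ADBBGNX_ADDRESS_LINE_1 := SUBSTR(agentrecord.producerrec.businessAddressLine1, 1, 40);</code>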
This video shows how to install AnythingLLM locally to privately and securely run any LLM with any RAG document. It all runs locally, with zero internet connectivity required.