Table compression [message #657140]
Fri, 28 October 2016 08:29
srinivas.k2005
Hello,
Currently, we are planning to reduce our table sizes as much as possible to improve query performance, and I have come across the concept of "table compression".
The code is below.
CREATE TABLE sales
 (saleskey    NUMBER,
  quarter     NUMBER,
  product     NUMBER,
  salesperson NUMBER,
  amount      NUMBER(12, 2),
  region      VARCHAR2(10))
 COMPRESS;
Please suggest whether this COMPRESS really helps. Also, please advise on any limitations or disadvantages of using it.
Regards,
SRK
Re: Table compression [message #657143 is a reply to message #657141]
Fri, 28 October 2016 08:46
srinivas.k2005
Let's say we want to improve the performance of the table itself rather than of individual queries. We have done the steps below.
1. Rebuilt the indexes
2. Analyzed the table
In addition to those, can we run the commands below to improve the table's performance? Do they impact anything? Please advise.
SQL> alter table mytable enable row movement;

Table altered.

SQL> alter table mytable shrink space compact;

Table altered.
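A note on the commands above: SHRINK COMPACT on its own is not valid syntax; the keyword is SHRINK SPACE COMPACT. A COMPACT-only shrink reorganizes the rows but deliberately leaves the high-water mark where it is, so a second pass is needed to actually release the space. A minimal sketch, assuming mytable lives in an ASSM (Automatic Segment Space Management) tablespace, which shrink requires:

```sql
-- Row movement must be enabled before any shrink
ALTER TABLE mytable ENABLE ROW MOVEMENT;

-- Pass 1: compact the rows but keep the high-water mark
-- (avoids the brief exclusive lock of a full shrink)
ALTER TABLE mytable SHRINK SPACE COMPACT;

-- Pass 2: move the high-water mark down and release the space
ALTER TABLE mytable SHRINK SPACE;
```

Dependent indexes can be shrunk in the same operation with SHRINK SPACE CASCADE.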
Re: Table compression [message #657145 is a reply to message #657143]
Fri, 28 October 2016 08:52
Roachcoach
Can you explain why you believe the "table performance" should be improved, or whether it even can be? What does that even mean? Do you have any investigative evidence, or basis, for the claim that "table performance" is subpar?
Re: Table compression [message #657147 is a reply to message #657145]
Fri, 28 October 2016 09:05
srinivas.k2005
Currently, we have a few tables, each with millions of records.
We have a lot of queries that refer to these tables, and we have already tuned the queries (explain plan, checking indexes, joins, access paths, etc.).
We also tested adding hints to the queries and found no improvement (the hints were removed after testing).
We found small improvements after rebuilding the indexes. Are there any similar methods? That is why I am asking whether the ones below would improve performance.
1. alter table mytable enable row movement;
   alter table mytable shrink space compact;
2. table compression
Regards,
SRK
Re: Table compression [message #657156 is a reply to message #657153]
Fri, 28 October 2016 09:54
Roachcoach
One of the interesting foibles of compression is that it doesn't work the way most people assume. Unless the data is read-only, you're going to get some weird results, which is why I asked my original questions.
It's actually a really detailed analysis piece: you need to consider the existing bottlenecks to work out whether compression will help performance.
Re: Table compression [message #657179 is a reply to message #657147]
Fri, 28 October 2016 13:44
John Watson
srinivas.k2005 wrote on Fri, 28 October 2016 15:05:
Currently, we have a few tables, each with millions of records.
We have a lot of queries that refer to these tables, and we have already tuned the queries (explain plan, checking indexes, joins, access paths, etc.).
We also tested adding hints to the queries and found no improvement (the hints were removed after testing).
We found small improvements after rebuilding the indexes. Are there any similar methods? That is why I am asking whether the ones below would improve performance.
1. alter table mytable enable row movement;
   alter table mytable shrink space compact;
2. table compression
Regards,
SRK
It is possible that compression can improve the performance of table full scan operations. It cannot improve the performance of indexed access paths. Are you already getting scan access, or is that something you want to move towards?
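One quick way to answer that last question is to look at the execution plan and see whether the optimizer is already choosing a full scan. A minimal sketch, reusing the sales table from the original post with a hypothetical predicate:

```sql
EXPLAIN PLAN FOR
SELECT SUM(amount) FROM sales WHERE region = 'WEST';

-- Look for a TABLE ACCESS FULL step (scan access) versus
-- INDEX RANGE SCAN / TABLE ACCESS BY INDEX ROWID (indexed access)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```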
Re: Table compression [message #657209 is a reply to message #657179]
Tue, 01 November 2016 11:11
Bill B
A slow query is almost always caused by stale statistics and a lack of the correct indexes. Look at the explain plan when you run a query to find the bottlenecks. How many records are in the table? How long is your query running? I have tables with hundreds of millions of records, and my queries against them may take less than a second. How often do you refresh your statistics? Remember that if you compress your table, the database has to uncompress the data to use it. In most cases the query results would be slower with compression, not faster.
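Refreshing the statistics suggested above is a one-liner with DBMS_STATS. A minimal sketch, assuming a table named SALES in the current schema:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,      -- current schema
    tabname => 'SALES',
    cascade => TRUE);     -- gather statistics on the indexes too
END;
/
```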
Re: Table compression [message #657210 is a reply to message #657209]
Tue, 01 November 2016 13:11
pablolee
Quote: In most cases the query results would be slower with compression, not faster.
Do you have any evidence of this? This paper would appear to disagree.
Quote: One significant advantage is Oracle's ability to read compressed blocks directly without having to first uncompress the block. Therefore, there is no measurable performance degradation for accessing compressed data.
Re: Table compression [message #657213 is a reply to message #657211]
Tue, 01 November 2016 14:20
Solomon Yakobson
pablolee wrote on Tue, 01 November 2016 15:08:
One other thing to consider: do you have the appropriate license to actually use Advanced Compression?
I didn't see the OP using advanced compression. COMPRESS defaults to BASIC.
SY.
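To illustrate the distinction, here is a hedged sketch of the two syntax levels (table names are hypothetical). One practical caveat with BASIC: it only compresses rows loaded via direct-path operations (CTAS, INSERT /*+ APPEND */, direct loads); conventional inserts go in uncompressed.

```sql
-- BASIC compression: included in Enterprise Edition, direct-path loads only
CREATE TABLE sales_basic COMPRESS
  AS SELECT * FROM sales;

-- OLTP compression: also compresses conventional DML, but requires
-- the separately licensed Advanced Compression option
-- (11g syntax: COMPRESS FOR OLTP; 12c: ROW STORE COMPRESS ADVANCED)
CREATE TABLE sales_oltp COMPRESS FOR OLTP
  AS SELECT * FROM sales;
```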
Re: Table compression [message #657215 is a reply to message #657209]
Tue, 01 November 2016 15:28
John Watson
Scan operations will usually be faster, simply because the segment has fewer blocks. The decompression should not take any time, because (unless you are using HCC) the "compression" isn't compression at all: it is de-duplication, replacing repeated occurrences of a string with a token. There is no reason why extracting a row from a block formatted like that would be slower than if the string were repeated.
Even HCC, which uses real compression algorithms, is usually faster, because you can often get 20:1 or 40:1 compression ratios. Think what that does for reducing the I/O requirement.
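A simple way to measure what compression would buy on your own data is to build a compressed copy with a direct-path load and compare segment sizes. A sketch, assuming enough free space for a temporary copy (sales_comp is a hypothetical scratch table):

```sql
-- Compressed copy built via CTAS (a direct-path load, so BASIC applies)
CREATE TABLE sales_comp COMPRESS
  AS SELECT * FROM sales;

-- Compare block counts: the ratio between the two is your
-- effective compression ratio for this data
SELECT segment_name, blocks
FROM   user_segments
WHERE  segment_name IN ('SALES', 'SALES_COMP');
```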