Does Your Oracle Table Really Need 255 or More Columns?

Occasionally it may be necessary to have a table with 255 or more columns (data marts and data warehouses come immediately to mind). Oracle allows up to 1,000 columns in a table, but once the column list expands beyond 255 some interesting behavior appears. Let’s look at a situation, using Oracle 12.2.0.1 on OEL 7 update 4, and see how things can change.

The experiment starts with a script by Jonathan Lewis (abbreviated in the output) that creates a table with 250 columns:

 
SQL> create table t1(
  2           col000,
  3           col001,
  4           col002,
  5           col003,
  6           col004,
  7           col005,
  8           col006,
  9           col007,
 10           col008,
...
211           col209,
212           col210,
213           col211,
214           col212,
215           col213,
216           col214,
217           col215,
218           col216,
219           col217,
220           col218,
221           col219,
222           col220,
223           col221,
224           col222,
225           col223,
226           col224,
227           col225,
228           col226,
229           col227,
230           col228,
231           col229,
232           col230,
233           col231,
234           col232,
235           col233,
236           col234,
237           col235,
238           col236,
239           col237,
240           col238,
241           col239,
242           col240,
243           col241,
244           col242,
245           col243,
246           col244,
247           col245,
248           col246,
249           col247,
250           col248,
251           col249
252  )
253  as
254  with generator as (
255           select
256               rownum id
257           from dual
258           connect by
259               level <= 1e3
260  )
261  select
262           lpad(000,10,'0'),
263           lpad(001,10,'0'),
264           lpad(002,10,'0'),
265           lpad(003,10,'0'),
266           lpad(004,10,'0'),
267           lpad(005,10,'0'),
268           lpad(006,10,'0'),
269           lpad(007,10,'0'),
270           lpad(008,10,'0'),
271           lpad(009,10,'0'),
272           lpad(010,10,'0'),
...
473           lpad(211,10,'0'),
474           lpad(212,10,'0'),
475           lpad(213,10,'0'),
476           lpad(214,10,'0'),
477           lpad(215,10,'0'),
478           lpad(216,10,'0'),
479           lpad(217,10,'0'),
480           lpad(218,10,'0'),
481           lpad(219,10,'0'),
482           lpad(220,10,'0'),
483           lpad(221,10,'0'),
484           lpad(222,10,'0'),
485           lpad(223,10,'0'),
486           lpad(224,10,'0'),
487           lpad(225,10,'0'),
488           lpad(226,10,'0'),
489           lpad(227,10,'0'),
490           lpad(228,10,'0'),
491           lpad(229,10,'0'),
492           lpad(230,10,'0'),
493           lpad(231,10,'0'),
494           lpad(232,10,'0'),
495           lpad(233,10,'0'),
496           lpad(234,10,'0'),
497           lpad(235,10,'0'),
498           lpad(236,10,'0'),
499           lpad(237,10,'0'),
500           lpad(238,10,'0'),
501           lpad(239,10,'0'),
502           lpad(240,10,'0'),
503           lpad(241,10,'0'),
504           lpad(242,10,'0'),
505           lpad(243,10,'0'),
506           lpad(244,10,'0'),
507           lpad(245,10,'0'),
508           lpad(246,10,'0'),
509           lpad(247,10,'0'),
510           lpad(248,10,'0'),
511           lpad(249,10,'0')
512  from
513           generator         v2
514  where
515           rownum <= 1e4
516  ;

Table created.

Nothing really unusual there; the generator CTE manufactures only 1,000 rows, so even with the rownum <= 1e4 predicate the table ends up with 1,000 rows. Let’s look at some selected statistics from that table creation:


SQL>
SQL> select n.name, s.value
  2  from v$sesstat s, v$statname n
  3  where n.statistic# = s.statistic#
  4  and n.name in ('db block gets','consistent gets','db block changes','redo entries','redo size',
        'undo change vector size','table scan rows gotten','table scan blocks gotten',
        'HSC Heap Segment Block Changes')
  5  and s.sid = &u_sid
  6  /

NAME                                                                  VALUE    
---------------------------------------------------------------- ----------    
db block gets                                                          4503    
consistent gets                                                        7918    
db block changes                                                       4107    
redo entries                                                           2160    
redo size                                                            581496    
undo change vector size                                              154272    
table scan rows gotten                                                16281    
table scan blocks gotten                                                315    
HSC Heap Segment Block Changes                                          509    

9 rows selected.
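
As an aside, the v$sesstat figures above are cumulative for the session, so they include anything the session did before the CREATE TABLE as well. To attribute numbers to a single statement it is common to snapshot the statistics before and after the statement and report the difference. A minimal sketch, assuming a scratch table named stat_snap (a hypothetical helper, not anything Oracle supplies):

-- Capture a baseline of the current session statistics
create table stat_snap as
select n.name, s.value
from   v$sesstat s, v$statname n
where  n.statistic# = s.statistic#
and    s.sid = sys_context('userenv','sid');

-- ... run the statement of interest ...

-- Report only the statistics that changed
select n.name, s.value - p.value as delta
from   v$sesstat s, v$statname n, stat_snap p
where  n.statistic# = s.statistic#
and    n.name       = p.name
and    s.sid        = sys_context('userenv','sid')
and    s.value      > p.value
order  by delta desc;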

The redo entries and redo size will be of particular interest. The table contains 1,000 rows and its creation generated roughly 580 KB of redo (581,496 bytes), or about 580 bytes per row, which is a reasonable volume of redo for this type of transaction. Let’s gather statistics and get an average row length before we alter the table and update data:


SQL>
SQL> begin
  2           dbms_stats.gather_table_stats(
  3               ownname     => user,
  4               tabname     => 'T1',
  5               method_opt  => 'for all columns size 1'
  6           );
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> select
  2           avg_row_len, num_rows,  blocks,
  3           num_rows / trunc(8000/avg_row_len) estimated_blocks
  4  from
  5           user_tables
  6  where
  7           table_name = 'T1'
  8  ;

AVG_ROW_LEN   NUM_ROWS     BLOCKS ESTIMATED_BLOCKS                             
----------- ---------- ---------- ----------------                             
       2750       1000        518              500                             

SQL>
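
The estimated_blocks expression in that query simply assumes roughly 8,000 usable bytes in an 8 KB block: trunc(8000/2750) gives 2 rows per block, and 1,000 rows / 2 rows per block = 500 estimated blocks, reasonably close to the 518 blocks actually allocated.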

Let’s add 10 columns to the table, bringing the total number of columns to 260:


=================
Add a few columns
=================
SQL>
SQL> alter table t1 add(
  2           col250 varchar2(10),
  3           col251 varchar2(10),
  4           col252 varchar2(10),
  5           col253 varchar2(10),
  6           col254 varchar2(10),
  7           col255 varchar2(10),
  8           col256 varchar2(10),
  9           col257 varchar2(10),
 10           col258 varchar2(10),
 11           col259 varchar2(10)
 12  )
 13  ;

Table altered.

SQL>
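
A quick check against the data dictionary should confirm that the table now carries 260 columns, for example:

select count(*) as column_count
from   user_tab_columns
where  table_name = 'T1';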

Now we update the last column in each row:


SQL>
SQL> update t1 set col259 = lpad('259',10,'0');

1000 rows updated.

SQL> commit;

Commit complete.

SQL>

Here is where the interesting part comes in: querying v$sesstat after the update completes displays some drastically different numbers:


SQL> select n.name, s.value
  2  from v$sesstat s, v$statname n
  3  where n.statistic# = s.statistic#
  4  and n.name in ('db block gets','consistent gets','db block changes','redo entries','redo size',
      'undo change vector size','table scan rows gotten','table scan blocks gotten','HSC Heap Segment Block Changes')
  5  and s.sid = &u_sid
  6  /

NAME                                                                  VALUE    
---------------------------------------------------------------- ----------    
db block gets                                                         16943    
consistent gets                                                       47180    
db block changes                                                      15966    
redo entries                                                           9140    
redo size                                                           9183028    
undo change vector size                                             4059192    
table scan rows gotten                                                53075    
table scan blocks gotten                                               2231    
HSC Heap Segment Block Changes                                         4078    

9 rows selected.

SQL>

Breaking the 255-column barrier increases the redo generated to almost ten times the original volume; Oracle is now generating over 9,000 bytes of redo per row, and all that was done was to add columns that pushed the table past 255 and then update one of them. Undo also increased significantly once that barrier was breached. Each row is now basically two pieces, one piece of 255 columns and another piece of 5 columns, which causes Oracle to perform an insert of a new row piece for each row updated (internally there are now twice as many row pieces as there were rows to start with because of the split). Even though inserts don’t generate a lot of redo or undo on their own, the extra pieces clearly inflate the total volume of both. Note that the ten added columns started out NULL and nine of them remained NULL after the update; I suspect the extra work comes from the fact that the column being updated was added at the end of each row, beyond the 255-column boundary.
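
One way to probe the split for yourself is to compare the work needed to reach a leading column with the work needed to reach the trailing column, watching statistics such as 'consistent gets' and 'table fetch continued row' between runs. How much difference shows up depends on the release and on where the second row piece landed, so treat this purely as an exploratory sketch:

-- Scan a leading column, then the trailing column,
-- and compare the session statistics after each run
select /* leading column */ count(col000) from t1;

select n.name, s.value
from   v$sesstat s, v$statname n
where  n.statistic# = s.statistic#
and    n.name in ('consistent gets','table fetch continued row')
and    s.sid = sys_context('userenv','sid');

select /* trailing column */ count(col259) from t1;

select n.name, s.value
from   v$sesstat s, v$statname n
where  n.statistic# = s.statistic#
and    n.name in ('consistent gets','table fetch continued row')
and    s.sid = sys_context('userenv','sid');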

It’s still not clear why Oracle increases redo and undo generation so drastically when a table is created and then extended past the 255-column barrier. Knowing this can occur, however, may make it easier to track down large increases in archive log generation (more log switches due to the increased redo volume) when transaction volume remains essentially constant.
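
As a preventive check it can also be worth scanning the data dictionary for tables that are already at or past the 255-column mark, since those are the candidates for this kind of redo and undo inflation. A minimal sketch against user_tab_cols, counting only visible, non-virtual columns:

select table_name, count(*) as column_count
from   user_tab_cols
where  hidden_column  = 'NO'
and    virtual_column = 'NO'
group  by table_name
having count(*) > 255
order  by column_count desc;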
