But what happens to dependent objects? Everything will get invalidated. We have a similar situation: we delete around 3 million records from a 30-million-row table every day.
Yeah, of course it'll recompile itself when it is called next time. There is no logical column to partition on. I guess the insert into a new table will take considerable time with 27 million records.

November 12, 2002 - am UTC

Wait 10 days, so that you are deleting 30 million records from a 60-million-record table, and then this approach will be much more efficient. Deleting 3 million records from an indexed table will take considerable time.
This was a totally new paradigm for the application, and one that saved the entire mission-critical application. The in-place updates would not have worked with the terabytes of data that we have in our database. In response to the Jack Silvey (from Richardson, TX) review, where he wrote "It is the only way to fly."

Murali

... from old_table;
index new_table
grant on new_table
add constraints on new_table
etc. on new_table
drop table old_table;
rename new_table to old_table;

You can do that using parallel query, with nologging on most operations, generating very little redo and no undo at all -- in a fraction of the time it would take to update the data. I don't have a 100-million-row table to test with for you, but the amount of work required to update 1,000,000 indexed rows is pretty large.

We instituted the insert with append into a dummy table with nologging, and were able to complete the "update" in under 30 minutes.
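The rebuild recipe above can be sketched out in full as follows. This is a minimal, hedged sketch, not the exact statements from the thread: the table name `old_table`, the columns, the index, the constraint, and the grantee are all hypothetical, and the "update" is assumed to be expressible in the SELECT list of a CREATE TABLE AS SELECT.

```sql
-- Sketch only; object names and the transformation are hypothetical.
-- 1. Build the replacement table in direct-path mode, applying the
--    "update" in the SELECT list instead of running UPDATE in place.
CREATE TABLE new_table
  PARALLEL
  NOLOGGING
AS
SELECT id,
       UPPER(description) AS description   -- the "update" happens here
FROM   old_table;

-- 2. Recreate the supporting objects on the new table.
CREATE INDEX new_table_idx ON new_table (id) NOLOGGING PARALLEL;
ALTER TABLE new_table ADD CONSTRAINT new_table_pk PRIMARY KEY (id);
GRANT SELECT ON new_table TO some_role;    -- re-grant as needed

-- 3. Swap the new table in for the old one.
DROP TABLE old_table;
RENAME new_table TO old_table;
```

Because the CTAS and index build run NOLOGGING and in parallel, they generate minimal redo and no undo, which is where the large time savings over a conventional UPDATE come from. Note that dependent objects (views, procedures) referencing the table will be invalidated by the drop/rename and will recompile on next use, as discussed above.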