Postgres "out of memory for query result" error moving bytea

2 years 10 months ago #12647 by gerber
A JDBC connection with its Fetch Size set to 50 moves this particular table in about 5 minutes with no errors. To match that setting, I set "Fetch Rows" to 50 in the Extended Properties of this PG OLE DB connection, but it has no effect on the outcome, so I have to assume there is an issue with the Fetch Rows parameter in the OLE DB driver, since the setting does not appear to do anything.

This is the SSIS "ConnectionString" after setting "Fetch Rows" to 50:
Data Source=pgserver;User ID=pguser;Initial Catalog=pgdatabase;Provider=PGNP.1;Persist Security Info=True;Extended Properties="PORT=5433;MIN_FETCH_ROWS=50;SSL=allow;";
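
For reference, a working fetch-size setting amounts to cursor-style retrieval: the driver pulls the result in batches of 50 rows instead of buffering the entire result set on the client. A minimal server-side sketch of that behavior (the cursor and table names here are hypothetical, not part of my actual setup):

BEGIN;
-- declare a server-side cursor over the table being transferred
DECLARE blob_cur CURSOR FOR SELECT id, type, data FROM my_blob_table;
-- pull 50 rows at a time; the client repeats this until no rows are returned
FETCH 50 FROM blob_cur;
CLOSE blob_cur;
COMMIT;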

2 years 10 months ago #12656 by Moderator
Thank you for the very detailed info! We are working on the issue, and we'll update the thread when the fix is ready.

2 years 9 months ago #12711 by Moderator
We worked on this issue last week and are still working on it, but we have not been able to reproduce it yet. Could you send us a sample record, or the size of the blob, to help us reproduce the issue?

We are currently experimenting with various bytea sizes.

2 years 9 months ago #12713 by Moderator
We were able to reproduce the out-of-memory issue. Thanks to all users who sent additional details!
It happens when blob sizes are around 1GB and the query is executed from a 32-bit application.

The following function can be used to import a large file into a bytea value for the insert:
create or replace function bytea_import(p_path text, p_result out bytea) 
                   language plpgsql as $$
declare
  l_oid oid;
  r record;
begin
  p_result := '';
  -- lo_import() reads the file on the database server (not the client)
  -- and stages it as a temporary large object
  select lo_import(p_path) into l_oid;
  -- concatenate the large object's pages, in order, into the bytea result
  for r in ( select data 
             from pg_largeobject 
             where loid = l_oid 
             order by pageno ) loop
    p_result := p_result || r.data;
  end loop;
  -- drop the temporary large object
  perform lo_unlink(l_oid);
end;$$;

And an example of its use:
CREATE TABLE test_blob
(
id integer NOT NULL,
type character varying(100) NOT NULL,
data bytea NOT NULL,
CONSTRAINT pk_test_blob PRIMARY KEY (id, type)
);

INSERT INTO test_blob
SELECT 1,'huge.iso',bytea_import('D:\\DependentInstalls\\huge.iso');

SELECT * FROM test_blob;
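
If the reading application must remain 32-bit, a possible workaround (a sketch only, using the table from the example above; the 10MB chunk size is an arbitrary choice) is to retrieve the bytea in bounded pieces with substring(), so no single query result approaches the 32-bit limit:

-- first 10MB piece; the client repeats the query, advancing the FROM offset
-- by the chunk size (10485760 bytes) until the returned chunk is empty
SELECT octet_length(data) AS total_bytes,
       substring(data FROM 1 FOR 10485760) AS chunk
FROM   test_blob
WHERE  id = 1 AND type = 'huge.iso';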

2 years 8 months ago #12813 by gerber
Thanks for all the "pre-sale" assistance so we could fully evaluate the product. We have confirmed that with pgoledb, we can now move very large bytea (blob) objects as well as tables with hundreds of millions of rows from PostgreSQL into SQL Server using SSIS 2012 (a.k.a. SQL Server Data Tools plus Business Intelligence for VS2012) in a fully 64-bit environment. Total size of our DB for this test was about 50GB spread out over about 200 tables. Thanks again!

2 years 8 months ago #12814 by Moderator
Thank you for the update! Let me finalize this thread with an additional clarification. The out-of-memory issue when handling >1GB blobs in 32-bit applications is caused by the 32-bit memory model: a 32-bit process has at most a few GB of addressable memory, so buffering a blob of that size (plus intermediate copies) exhausts the address space. We recommend switching to the 64-bit memory model (by using the 64-bit version of the provider) when processing very large blobs.
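
To see in advance which rows are affected, a simple check (using the table from the earlier example; the 1GB threshold is only the approximate figure observed in this thread) is:

-- list bytea payloads large enough to be risky for a 32-bit client
SELECT id, type, octet_length(data) AS bytes
FROM   test_blob
WHERE  octet_length(data) > 1073741824
ORDER  BY bytes DESC;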
