DBD::Pg must have the PG_BYTEA type specified for inserting BLOBs

RESOLVED FIXED in Bugzilla 2.20

Status

Priority: P1
Severity: normal
Status: RESOLVED FIXED
Reported: 14 years ago
Last modified: 14 years ago

People

(Reporter: mkanat, Assigned: mkanat)

Tracking

Version: 2.19.2
Target Milestone: Bugzilla 2.20
Bug Flags: approval+

Attachments

(1 attachment, 1 obsolete attachment)

v2 (3.39 KB, patch)
Tomas.Kopal: review+
Description (Assignee, 14 years ago)
You have to do:

$sth->bind_param(1, undef, { pg_type => DBD::Pg::PG_BYTEA });

before you insert anything into a BLOB using DBD::Pg.
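For context, a minimal sketch of how that binding fits into an insert. The connection string, table, and column names here are illustrative, not taken from the patch:

```perl
use strict;
use warnings;
use DBI;
use DBD::Pg;

# Placeholder connection parameters; adjust for a real database.
my $dbh = DBI->connect('dbi:Pg:dbname=bugs', 'bugs', 'secret',
                       { RaiseError => 1 });

my $data = "\x89PNG\x0D\x0A\x1A\x0A";   # sample binary content
my $sth  = $dbh->prepare(
    'INSERT INTO attachments (filename, thedata) VALUES (?, ?)');

# Without this, DBD::Pg quotes the value as text, and binary data
# is mangled or rejected.  Binding with undef just sets the type;
# per DBI, the type attribute is "sticky" and applies to the value
# later supplied by execute().
$sth->bind_param(2, undef, { pg_type => DBD::Pg::PG_BYTEA });
$sth->execute('logo.png', $data);
```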
Updated (Assignee, 14 years ago)
Status: NEW → ASSIGNED
Priority: -- → P1
Target Milestone: --- → Bugzilla 2.20
Comment 1 (Assignee, 14 years ago)
Posted patch: Cross-DB Solution (obsolete)
Attachment #177135 - Flags: review?(Tomas.Kopal)
Updated (Assignee, 14 years ago)
Blocks: 285113

Comment 2 (14 years ago)
Comment on attachment 177135 (Cross-DB Solution)

>Index: attachment.cgi
>@@ -921,8 +920,15 @@
>   my $sql_timestamp = SqlQuote($timestamp); 
> 
>   # Insert the attachment into the database.
>-  SendSQL("INSERT INTO attachments (bug_id, creation_ts, filename, description, mimetype, ispatch, isprivate, submitter_id, thedata) 
>-           VALUES ($::FORM{'bugid'}, $sql_timestamp, $filename, $description, $contenttype, $::FORM{'ispatch'}, $isprivate, $::userid, $thedata)");
>+  my $sth = $dbh->prepare("INSERT INTO attachments
>+      (thedata, bug_id, creation_ts, filename, description,
>+       mimetype, ispatch, isprivate, submitter_id) 
>+      VALUES (?, $::FORM{'bugid'}, $sql_timestamp, $filename, 
>+              $description, $contenttype, $::FORM{'ispatch'},
>+              $isprivate, $::userid)");
>+  trick_taint($data);

This needs a good comment. trick_taint is a dangerous beast and there needs to
be an explanation of its use.
Also, I am not sure what the regex in the trick_taint function will do with
binary data (possibly quite long), but I hope it can deal with it.
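For readers unfamiliar with it: Bugzilla's trick_taint is, approximately, the standard regex taint-laundering idiom, which is why feeding it binary data is a reasonable worry. A sketch of the idiom, not the exact Bugzilla source:

```perl
# Approximation of the taint-laundering idiom behind trick_taint:
# capture the entire string and assign the capture back.  Perl's
# taint rules treat regex captures as untainted.  The /s modifier
# lets "." match newlines, so arbitrary binary bytes survive the
# round trip intact.
sub trick_taint {
    return undef unless defined $_[0];
    $_[0] =~ /^(.*)$/s;   # matches anything, including binary bytes
    $_[0] = $1;           # the capture is considered untainted
    return defined $_[0];
}
```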

>Index: Bugzilla/DB.pm
>@@ -619,6 +768,11 @@
> 
> =over 4
> 
>+=item C<BLOB_TYPE>
>+
>+The C<\%attr> argument that must be passed to bind_param in order to 
>+correctly escape a C<LONGBLOB> type.
>+

If this is abstract, you should add it to the list of abstract properties to
check. If not, there should be a default in DB.pm.

>Index: Bugzilla/DB/Mysql.pm
>+use constant BLOB_TYPE => DBI::SQL_BLOB;

>Index: Bugzilla/DB/Pg.pm
>+use constant BLOB_TYPE => { pg_type => DBD::Pg::PG_BYTEA };

Why are these two different, one a hash and the other just a constant from DBI?

Also, don't you need to use something like LongReadLen to set the data size for
large data manipulation? I haven't used DBI for BLOBs before, but the
documentation for this function says that it should be used...
Attachment #177135 - Flags: review?(Tomas.Kopal) → review-
Comment 3 (Assignee, 14 years ago)
> >+  trick_taint($data);
> 
> This needs a good comment. trick_taint is a dangerous beast and there needs to
> be explanation of the use.

  OK.

> Also, I am not sure what the regex in the trick_taint function will do with
> binary data (possibly quite long), but I hope it can deal with it.

  Whatever SqlQuote was doing with it, before. Same thing.

> If this is abstract, you should add it to the list of abstract properties to
> check. If not, there should be default in DB.pm.

  Hrm... well, technically, the default is SQL_BLOB. But very few DBDs implement
that, I think. :-) It should go as the default, though. :-)

> >Index: Bugzilla/DB/Pg.pm
> >+use constant BLOB_TYPE => { pg_type => DBD::Pg::PG_BYTEA };
> 
> Why are these two different, one hash and the other just constant from DBI?

  Because they are defined just as "the argument that goes to bind_param". Note
that the Pg one has to use a special pg_type thing. I could do the same thing
for the MySQL one, using the DBI standard, and make it { TYPE => SQL_BLOB }, but
I don't feel compelled to do so.
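In other words, each driver defines BLOB_TYPE as exactly the \%attr its bind_param needs, and callers pass it through unchanged. A sketch of the intended caller-side usage (table and variable names are illustrative):

```perl
# Driver-agnostic caller: BLOB_TYPE is whatever \%attr this driver's
# bind_param wants, so the caller never mentions pg_type or SQL_BLOB.
my $sth = $dbh->prepare(
    'INSERT INTO attachments (thedata, filename) VALUES (?, ?)');

# Binding with undef sets the (sticky) type; execute supplies values.
$sth->bind_param(1, undef, $dbh->BLOB_TYPE);
$sth->execute($data, $filename);
```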

> Also, don't you need to use something like LongReadLen to set the data size 
> for large data manipulation? I haven't used DBI for BLOBS before but 
> documentation for this function says that it should be used...

  I think maybe you're thinking of Large Objects, which are something different.
This is the same thing that we do now for MySQL, just put into a placeholder. If
I see any actual problems with it during testing or in the freeze, I can fix it.
Comment 4 (Assignee, 14 years ago)
Posted patch: v2
Addressed comments.

I think we're safe for now on LongReadLen. LongTruncOk is false by default, so
we'll see Pg throw an error if we fail reading the blob.
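For reference, the read-side knobs under discussion are plain DBI handle attributes. A hedged sketch of how they would be set; the 4 MiB cap is an arbitrary example, not a value from the patch:

```perl
# LongReadLen caps how many bytes of a long/BLOB column a fetch will
# return; LongTruncOk (false by default) decides whether truncation
# is a fatal error or silently shortens the data.
$dbh->{LongReadLen} = 4 * 1024 * 1024;   # e.g. allow blobs up to 4 MiB
$dbh->{LongTruncOk} = 0;                 # default: truncation is fatal

my ($thedata) = $dbh->selectrow_array(
    'SELECT thedata FROM attachments WHERE attach_id = ?',
    undef, $attach_id);
```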
Attachment #177135 - Attachment is obsolete: true
Attachment #177717 - Flags: review?(Tomas.Kopal)

Comment 5 (14 years ago)
attachment.cgi is the one place I've always expected we'd wind up with
database-specific code outside of the drivers...  I've never seen any two
databases handle binary data the same way.

Updated (14 years ago)
Attachment #177717 - Flags: review?(Tomas.Kopal) → review+
Updated (Assignee, 14 years ago)
Flags: approval?
Flags: approval? → approval+
Comment 6 (Assignee, 14 years ago)
Checking in attachment.cgi;
/cvsroot/mozilla/webtools/bugzilla/attachment.cgi,v  <--  attachment.cgi
new revision: 1.77; previous revision: 1.76
done
Checking in Bugzilla/DB.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/DB.pm,v  <--  DB.pm
new revision: 1.38; previous revision: 1.37
done
Checking in Bugzilla/DB/Pg.pm;
/cvsroot/mozilla/webtools/bugzilla/Bugzilla/DB/Pg.pm,v  <--  Pg.pm
new revision: 1.7; previous revision: 1.6
done
Status: ASSIGNED → RESOLVED
Last Resolved: 14 years ago
Resolution: --- → FIXED