
Sunday, 22 November 2020

List the data types that are allowed for SQL attributes.

 Question: 4.2. List the data types that are allowed for SQL attributes. 

Solution:

Numeric data types

Integer numbers

  • INT
  • INTEGER
  • SMALLINT
  • BIGINT

Floating-point (real) numbers

  • REAL
  • DOUBLE
  • FLOAT

Fixed-point numbers

  • DECIMAL(n,m)
  • DEC(n,m)
  • NUMERIC(n,m)
  • NUM(n,m)

Character-string data types

Fixed length

  • CHAR(n)
  • CHARACTER(n)

Varying length

  • VARCHAR(n)
  • CHAR VARYING(n)
  • CHARACTER VARYING(n)

Long varchar

Large object data types

Characters:

  • CLOB
  • CHAR LARGE OBJECT
  • CHARACTER LARGE OBJECT

Bits:

  • BLOB
  • BINARY LARGE OBJECT

Boolean data type

Values of TRUE, FALSE, or NULL (NULL is interpreted as UNKNOWN)

DATE data type

Ten positions

Components are YEAR, MONTH, and DAY in the form YYYY-MM-DD
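
Putting these together, here is a minimal, illustrative table definition that uses one type from each category. The table and column names are made up, and exact type names and support vary by DBMS (MySQL, for example, uses TEXT rather than CLOB):

CREATE TABLE employee_example (
    emp_id     INTEGER,        -- integer number
    salary     DECIMAL(10,2),  -- fixed-point number
    rating     REAL,           -- floating-point number
    last_name  VARCHAR(30),    -- varying-length character string
    dept_code  CHAR(4),        -- fixed-length character string
    resume     CLOB,           -- character large object
    photo      BLOB,           -- binary large object
    is_active  BOOLEAN,        -- TRUE, FALSE, or NULL
    hire_date  DATE            -- YYYY-MM-DD
);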

Tuesday, 31 December 2019

How do I store binary data in MySQL?

Q: How do I store binary data in MySQL?

Solution:

The basic answer is in a BLOB data type / attribute domain. BLOB is short for Binary Large Object and that column data type is specific for handling binary data.

You should also be aware that there are other binary data types:

  • TINYBLOB / BLOB / MEDIUMBLOB / LONGBLOB
  • VARBINARY
  • BINARY

Each has its use cases. If the data has a known (short) length (e.g. packed data), BINARY or VARBINARY will often work, and they have the added benefit that you can index on them.


For a table like this:
CREATE TABLE binary_data (
    id INT(4) NOT NULL AUTO_INCREMENT PRIMARY KEY,
    description CHAR(50),
    bin_data LONGBLOB,
    filename CHAR(50),
    filesize CHAR(50),
    filetype CHAR(50)
);
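
As a rough sketch of how rows could go into such a table (the values and file path below are made up; LOAD_FILE reads from the database server host and additionally requires the FILE privilege and a compatible secure_file_priv setting):

-- Insert a small binary payload directly as a hex literal:
INSERT INTO binary_data (description, bin_data, filename, filesize, filetype)
VALUES ('tiny payload', x'89504E470D0A1A0A', 'header.bin', '8', 'application/octet-stream');

-- Or read a file that already exists on the database server host
-- (LOAD_FILE returns NULL if the file cannot be read):
INSERT INTO binary_data (description, bin_data, filename, filesize, filetype)
VALUES ('whole file', LOAD_FILE('/tmp/example.png'), 'example.png', '1024', 'image/png');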

Saturday, 9 November 2019

Setting up a database and have auto-commit turned off

Q: You are working on setting up a database and have auto-commit turned off so you can easily undo mistakes. Unfortunately, your session crashes – the database itself is fine but your remote connection failed due to network troubles. You are disconnected long enough for your mysql session to terminate.
Once you manage to reconnect and start a new session, what happens to the changes that you made since your last manual commit (or entry of a statement equivalent to a commit) and why? Select the best answer.
You may want to read up on the details.


1.
Because auto-commit is turned off, you have lost all of your work since you last manually committed your work. Any changes made since then have been rolled back.
2.
You will not have lost any work, because MySQL sessions automatically continue from where they left off last time. Any uncommitted work is still not committed.
3.
Whether or not you lost work depends on the type of commands in the current transaction. Actions less likely to interfere with another transaction are more likely to have been accepted.
4.
You will not have lost any work, because the end of a session acts as a commit.


Solution:
Option 1
When a MySQL session ends, whether cleanly or because the connection is lost, any open transaction is rolled back, not committed. Because auto-commit was turned off, everything done since the last explicit commit (or the last statement that causes an implicit commit) was still part of an open transaction, so that work has been rolled back and is lost.
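
A minimal sketch of the situation (the accounts table here is made up for illustration):

SET autocommit = 0;       -- changes are no longer committed automatically
START TRANSACTION;
UPDATE accounts SET balance = balance - 50 WHERE id = 1;
-- If the connection drops at this point, the server ends the session
-- and rolls the open transaction back; the update above is lost.
COMMIT;                   -- only work committed before the crash survives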
 

Performance of the database

Q: A shop stores information on Customers(cid, name, address, last_visit), Inventory(iid, name, cost, price, stock) and Purchases( cid, iid, when, price, quantity) FK cid REF Customers(cid) ON DELETE RESTRICT ON UPDATE CASCADE, FK iid REF Inventory(iid) ON DELETE RESTRICT ON UPDATE CASCADE.
Which of the following actions are more likely than not to be true regarding the performance of the database? Select all that apply.

 
1. Using the CREATE INDEX statement to create an index to cid in Customers would make no difference at all.
2. Creating indexes on Purchases may be harmful if we are expecting very frequent INSERT operations.
3. Attribute iid of Inventory having an index is likely to assist with an INSERT operation on Customers.
4. Creating an index on the stock attribute of Inventory would be useful if changes are often due to customers purchasing items (the integer stored in stock decreases by the quantity of every purchase for the product with the iid given in the Purchases tuple), but there are few INSERT operations on the table.


Solution:
NOTE: INSERT and UPDATE statements take more time on tables that have indexes, whereas SELECT statements become faster on those tables. The reason is that during an insert or update the database must maintain the index values as well.
1. False
Explanation: An index on cid in Customers is likely to improve performance. New customers are added relatively rarely, so there will be few INSERT (or UPDATE) operations on Customers. INSERT operations on Purchases, on the other hand, are frequent, and each one must look up cid in Customers because of the foreign key; the index speeds up that lookup.
2. True
Explanation: Since INSERT operations on Purchases are expected to be very frequent, maintaining its indexes may slow those operations down and hence degrade performance.
3. False
Explanation: An INSERT operation on Customers has nothing to do with the iid attribute of Inventory.
4. False
Explanation: Changes to the stock attribute of Inventory caused by customers purchasing items are UPDATE operations, and UPDATE operations also take more time on indexed columns, so such an index may degrade performance.
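
As a rough illustration of the trade-off (the index names are made up; whether each index actually helps depends on the workload, and a primary key on cid would already be indexed):

-- Speeds up the cid lookups caused by the foreign key on Purchases,
-- assuming cid is not already indexed as the primary key.
CREATE INDEX idx_customers_cid ON Customers (cid);

-- An index like this adds maintenance cost to every UPDATE of stock,
-- which is why statement 4 is considered harmful here.
CREATE INDEX idx_inventory_stock ON Inventory (stock);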
 

Stored procedures and functions are available in the current database

Q: Which of the following commands is the best way to generate an overview of what stored procedures and functions are available in the current database and what each does? Assume that the developer(s) of the database have followed the good coding guidelines laid out in this unit.


1.
SHOW PROCEDURES;
2.
SHOW PROCEDURE STATUS WHERE db = DATABASE();
3.
SELECT * FROM INFORMATION_SCHEMA.ROUTINES;
4.
SELECT name, comment, type FROM mysql.proc WHERE db = DATABASE();

Solution:
 
2. SHOW PROCEDURE STATUS WHERE db = DATABASE();
This is the best statement for listing the procedures stored in the current database, along with the Comment column that should describe what each one does.
SHOW PROCEDURES on its own is not valid syntax; the STATUS keyword is required.
SELECT * FROM INFORMATION_SCHEMA.ROUTINES; returns every column for routines in every database, which is far more than an overview requires.
SELECT name, comment, type FROM mysql.proc WHERE db = DATABASE(); selects a few useful columns, restricted to the current database, but it relies on the internal system table mysql.proc rather than the documented SHOW command.
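
SHOW PROCEDURE STATUS only gives a useful overview if the routines carry comments, which is what the coding guidelines assumed above provide. A hedged sketch (the procedure, table, and column names are made up for illustration):

DELIMITER //
CREATE PROCEDURE get_recent_customers()
    COMMENT 'Returns customers who visited in the last 30 days'
BEGIN
    SELECT cid, name
    FROM Customers
    WHERE last_visit >= CURDATE() - INTERVAL 30 DAY;
END //
DELIMITER ;

SHOW PROCEDURE STATUS WHERE db = DATABASE();

The Comment column of the output then tells the reader what each routine does without opening its source.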

Monday, 17 July 2017

XSD DataSets and ignoring foreign keys

Question: I have a pretty standard table set-up in a current application using the .NET XSD DataSet and TableAdapter features. My contracts table consists of some standard contract information, with a column for the primary department. This column is a foreign key to my Departments table, where I store the basic department name, id, and notes. This is all set up and functioning in my SQL Server.
When I use the XSD tool, I can drag both tables in at once and it auto detects/creates the foreign key I have between these two tables. This works great when I'm on my main page and am viewing contract data.
However, when I go to my administrative page to modify the department data, I typically do something like this:

Dim dtDepartment As New DepartmentDataTable()
Dim taDepartment As New DepartmentTableAdapter()

taDepartment.Fill(dtDepartment)

However, at this point an exception is thrown saying, in effect, that a foreign key reference is broken, I'm guessing because I don't have the Contract DataTable filled.
How can I fix this problem? I know I could simply remove the foreign key from the XSD to make things work, but it is nice to have the additional integrity check and to keep the XSD schema matching the SQL schema in the database.

Solution:  You can try turning check constraints off on the DataSet (it's in its properties), or altering the properties of that relationship and changing the key to a simple reference - up to you.

Friday, 14 July 2017

Flat File Databases

Question: What are your best practices around creating flat file database structures in PHP? A lot of the more mature PHP flat file frameworks I see out there attempt to implement SQL-like query syntax, which is over the top for my purposes in most cases (I would just use a database at that point).
Are there any elegant tricks out there to get good performance and features with the small code overhead one would want by taking on this problem in the first place?

Solution:  Well, it depends on the nature of the flat-file databases. Are they large or small? Are they simple arrays, or arrays with arrays nested in them? If it's something simple, say user profiles, built like this:

$user = array("name" => "dubayou", 
              "age" => 20,
              "websites" => array("dubayou.com","willwharton.com","codecream.com"),
              "and_one" => "more");

And to save or update the DB record for that user:


$dir = "../userdata/";  // make sure this directory is not reachable through the web server
file_put_contents($dir.$user['name'],serialize($user));

And to load the record for the user:


function get_user($name){
    return unserialize(file_get_contents("../userdata/".$name));
}

Monday, 10 July 2017

Check for changes to an SQL Server table?

Question: How can I monitor an SQL Server database for changes to a table without using triggers or modifying the structure of the database in any way? My preferred programming environment is .NET and C#.
I'd like to be able to support any SQL Server 2000 SP4 or newer. My application is a bolt-on data visualization for another company's product. Our customer base is in the thousands, so I don't want to have to put in requirements that we modify the third-party vendor's table at every installation.
By "changes to a table" I mean changes to table data, not changes to table structure.
Ultimately, I would like the change to trigger an event in my application, instead of having to check for changes at an interval.

The best course of action given my requirements (no triggers or schema modification, SQL Server 2000 and 2005) seems to be to use the BINARY_CHECKSUM function in T-SQL. The way I plan to implement is this:
Every X seconds run the following query:

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM sample_table
WITH (NOLOCK);

And compare that against the stored value. If the value has changed, go through the table row by row using the query:

SELECT row_id, BINARY_CHECKSUM(*)
FROM sample_table
WITH (NOLOCK);

And compare the returned checksums against stored values.

Solution:

 Take a look at the CHECKSUM command:

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK);

That will return the same number each time it's run as long as the table contents haven't changed. 
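
A sketch of the per-row comparison step mentioned in the question, assuming the previous checksums were saved in a table called checksum_snapshot(row_id, cs) - that table and its columns are made up here:

SELECT cur.row_id
FROM (
    SELECT row_id, BINARY_CHECKSUM(*) AS cs
    FROM sample_table WITH (NOLOCK)
) AS cur
LEFT JOIN checksum_snapshot AS snap
    ON snap.row_id = cur.row_id
WHERE snap.row_id IS NULL   -- row is new
   OR snap.cs <> cur.cs;    -- row has changed

Deleted rows would need a second comparison in the other direction, and, as with any checksum, two different row contents can occasionally produce the same value, so a change may in rare cases go undetected.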

Friday, 7 July 2017

Throw an error in a MySQL trigger

Question: If I have a trigger before the update on a table, how can I throw an error that prevents the update on that table?

Solution:

 As of MySQL 5.5, you can use the SIGNAL syntax to throw an exception:

signal sqlstate '45000' set message_text = 'My Error Message';

State 45000 is a generic state representing "unhandled user-defined exception".

Here is a more complete example of the approach:


delimiter //
use test//
create table trigger_test
(
    id int not null
)//
drop trigger if exists trg_trigger_test_ins //
create trigger trg_trigger_test_ins before insert on trigger_test
for each row
begin
    declare msg varchar(128);
    if new.id < 0 then
        set msg = concat('MyTriggerError: Trying to insert a negative value in trigger_test: ', cast(new.id as char));
        signal sqlstate '45000' set message_text = msg;
    end if;
end
//

delimiter ;
-- run the following as separate statements:
insert into trigger_test values (1), (-1), (2); -- everything fails as one row is bad
select * from trigger_test;
insert into trigger_test values (1); -- succeeds as expected
insert into trigger_test values (-1); -- fails as expected
select * from trigger_test;
