Select first row in each GROUP BY group?

DISTINCT ON is typically the simplest and fastest way to do this in PostgreSQL. (For performance optimizations for certain workloads, see below.)

    SELECT DISTINCT ON (customer)
           id, customer, total
    FROM   purchases
    ORDER  BY customer, total DESC, id;

Or shorter (if less clear) with ordinal numbers of output columns: SELECT DISTINCT ON (2) id, customer, total FROM purchases … Read more
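As a rough illustration of how DISTINCT ON keeps one row per group, here is a minimal self-contained sketch; the purchases table and its sample rows are invented for demonstration and are not part of the original answer:

    -- hypothetical sample data, purely for illustration
    CREATE TABLE purchases (
        id       serial PRIMARY KEY,
        customer text,
        total    numeric
    );

    INSERT INTO purchases (customer, total) VALUES
        ('alice', 10), ('alice', 30), ('bob', 20), ('bob', 5);

    -- one row per customer: the row with the highest total,
    -- ties broken by the smallest id
    SELECT DISTINCT ON (customer) id, customer, total
    FROM   purchases
    ORDER  BY customer, total DESC, id;
    -- expected result: alice's total-30 row and bob's total-20 row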

Joining tables on columns with comma separated values

You can simplify your join condition, and you need string_agg() to get the comma-separated list of author names:

    SELECT string_agg(author_name, ','), count(*)
    FROM   mas_book_author b
      JOIN mas_bk_accession_entry e
        ON b.author_id = ANY(string_to_array(author_ids, ',')::int[])
    WHERE  e.author_ids = '1,5';

Online example: http://rextester.com/NVNBH72654

But you should really fix your data model. Storing comma-separated values like the author_ids column is … Read more
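To make the pattern concrete, here is a small self-contained sketch of joining an integer key against a comma-separated ID column in PostgreSQL; the table definitions and sample rows are invented for illustration and only mirror the column names used above:

    -- hypothetical tables and data, purely for illustration
    CREATE TABLE mas_book_author (
        author_id   int PRIMARY KEY,
        author_name text
    );
    CREATE TABLE mas_bk_accession_entry (
        entry_id   int PRIMARY KEY,
        author_ids text   -- comma-separated author IDs, e.g. '1,5'
    );

    INSERT INTO mas_book_author VALUES (1, 'Knuth'), (5, 'Lamport');
    INSERT INTO mas_bk_accession_entry VALUES (100, '1,5');

    -- split the CSV column into an int[] and join with ANY()
    SELECT e.entry_id,
           string_agg(b.author_name, ',') AS authors,
           count(*)                       AS author_count
    FROM   mas_bk_accession_entry e
      JOIN mas_book_author b
        ON b.author_id = ANY(string_to_array(e.author_ids, ',')::int[])
    GROUP  BY e.entry_id;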

Row Offset in SQL Server

I would avoid using SELECT *. Specify the columns you actually want, even if that happens to be all of them.

SQL Server 2005+

    SELECT col1, col2
    FROM (
        SELECT col1, col2,
               ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
        FROM   MyTable
    ) AS MyDerivedTable
    WHERE MyDerivedTable.RowNum BETWEEN @startRow AND @endRow

SQL Server 2000

Efficiently Paging Through Large … Read more
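As a rough, runnable sketch of the ROW_NUMBER() paging pattern, assuming a hypothetical MyTable with an ID column and page-size values chosen purely for illustration:

    -- fetch "page 3" with a page size of 10 (illustrative values)
    DECLARE @pageSize int = 10;
    DECLARE @pageNum  int = 3;
    DECLARE @startRow int = (@pageNum - 1) * @pageSize + 1;  -- 21
    DECLARE @endRow   int = @pageNum * @pageSize;            -- 30

    SELECT col1, col2
    FROM (
        SELECT col1, col2,
               ROW_NUMBER() OVER (ORDER BY ID) AS RowNum
        FROM   MyTable
    ) AS MyDerivedTable
    WHERE  MyDerivedTable.RowNum BETWEEN @startRow AND @endRow
    ORDER  BY MyDerivedTable.RowNum;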

Oracle Sql Statement for unique timestamp for each row

The following UPDATE statement will guarantee that each row has a unique MY_TIMESTAMP value, by increasing the milliseconds by the rownum value.

EDIT: After Alessandro Rossi pointed out that there could be duplicate values, the query has been modified to use SYSTIMESTAMP for the update.

    UPDATE ITEM_HISTORY
    SET    my_timestamp = SYSTIMESTAMP + NUMTODSINTERVAL(rownum/1000, 'SECOND');

… Read more
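A small self-contained sketch of the idea, with a hypothetical ITEM_HISTORY table invented for illustration; each row gets SYSTIMESTAMP plus rownum milliseconds, so no two rows receive the same value:

    -- hypothetical table and data, purely for illustration
    CREATE TABLE item_history (
        item_id      NUMBER,
        my_timestamp TIMESTAMP
    );

    INSERT INTO item_history (item_id) VALUES (1);
    INSERT INTO item_history (item_id) VALUES (2);
    INSERT INTO item_history (item_id) VALUES (3);

    -- rownum/1000 seconds = rownum milliseconds per row
    UPDATE item_history
    SET    my_timestamp = SYSTIMESTAMP + NUMTODSINTERVAL(rownum/1000, 'SECOND');

    -- verify uniqueness: both counts should match
    SELECT COUNT(*), COUNT(DISTINCT my_timestamp) FROM item_history;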

Error!: SQLSTATE[HY000] [1045] Access denied for user 'divattrend_liink'@'localhost' (using password: YES)