Yahoo! Finance CSV file will not return Dow Jones (^DJI)
Replace ^DJI with INDU (that’s one of the tickers for the Dow) – that will work. No idea why ^DJI stopped working last weekend – someone has made a ‘negative enhancement’. Cheerio GT
gawk -vFPAT='[^,]*|"[^"]*"' '{print $1 "," $3}' | sort | uniq

This is an awesome GNU Awk 4 extension, where you define a field pattern instead of a field-separator pattern. Does wonders for CSV. (docs) ETA (thanks mitchus): To remove the surrounding quotes, gsub("^\"|\"$","",$3); if there are more fields than just $3 to process that way, just … Read more
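As a rough cross-check outside of gawk, the same field-pattern idea can be sketched in Python with re.findall; the pattern and sample line below are illustrative, not from the original answer:

```python
import re

# Analogue of gawk's FPAT: a field is either a quoted run or an unquoted run.
# [^,]+ is used instead of [^,]* to avoid zero-length matches in Python.
fpat = re.compile(r'"[^"]*"|[^,]+')

line = 'one,"two, with comma",three'
fields = fpat.findall(line)
print(fields)                # ['one', '"two, with comma"', 'three']
print(fields[1].strip('"'))  # the equivalent of the gsub that strips quotes
```

Note this sketch, like FPAT itself, treats the quoted alternative first, so embedded commas inside quotes stay in one field.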
I noticed that your problematic line uses doubled double quotes for escaping: "32 XIY ""W"" JK, RE LK" which should be interpreted simply as 32 XIY "W" JK, RE LK As described in RFC 4180, page 2 – If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped … Read more
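For what it's worth, Python's csv module (used here only as a neutral reference implementation of RFC 4180) parses the doubled quotes the same way:

```python
import csv
import io

# RFC 4180: a double quote inside a quoted field is escaped by doubling it.
line = '"32 XIY ""W"" JK, RE LK",next field'
row = next(csv.reader(io.StringIO(line)))
print(row[0])  # 32 XIY "W" JK, RE LK
```

The second field name is made up for the example; only the first field comes from the question.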
Maybe this will clear things up. (Or confuse you more.)

Const ForReading = 1
Const ForWriting = 2
sFolder = "H:\Letter Display\Letters\"
Set oFSO = CreateObject("Scripting.FileSystemObject")
For Each oFile In oFSO.GetFolder(sFolder).Files
    If UCase(oFSO.GetExtensionName(oFile.Name)) = "LTR" Then
        ProcessFiles oFSO, oFile
    End If
Next
Set oFSO = Nothing

Sub ProcessFiles(FSO, File)
    Set oFile2 = FSO.OpenTextFile(File.path, ForReading) … Read more
or use this:

hive -e 'select * from your_Table' | sed 's/[\t]/,/g' > /home/yourfile.csv

You can also specify the property set hive.cli.print.header=true before the SELECT to ensure that the header is created and copied to the file along with the data. For example:

hive -e 'set hive.cli.print.header=true; select * from your_Table' | sed 's/[\t]/,/g' > /home/yourfile.csv

If you don't … Read more
It's not possible using the standard Spark library, but you can use the Hadoop API for managing the filesystem – save the output in a temporary directory and then move the file to the requested path. For example (in pyspark):

df.coalesce(1) \
    .write.format("com.databricks.spark.csv") \
    .option("header", "true") \
    .save("mydata.csv-temp")

from py4j.java_gateway import java_import
java_import(spark._jvm, 'org.apache.hadoop.fs.Path')
fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(spark._jsc.hadoopConfiguration())
file = fs.globStatus(sc._jvm.Path('mydata.csv-temp/part*'))[0].getPath().getName()
fs.rename(sc._jvm.Path('mydata.csv-temp/' … Read more
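When the output lands on a local path rather than HDFS, the final filesystem step can be sketched with plain Python instead of the Hadoop API; this is only an illustration of the same move-and-rename idea, with hypothetical paths and a stand-in for Spark's part file:

```python
import glob
import os
import shutil

# Hypothetical layout: Spark left a single part-* file in a temp directory.
tmp_dir, target = "mydata.csv-temp", "mydata.csv"
os.makedirs(tmp_dir, exist_ok=True)
with open(os.path.join(tmp_dir, "part-00000"), "w") as f:
    f.write("a,b\n1,2\n")  # stand-in for the coalesced Spark output

# The move-and-rename step: promote the lone part file, drop the temp dir.
part_file = glob.glob(os.path.join(tmp_dir, "part*"))[0]
shutil.move(part_file, target)
shutil.rmtree(tmp_dir)
print(open(target).read())
```

The Hadoop globStatus/rename calls in the answer do the same thing when the destination is HDFS.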
RFC 7111: There is an RFC which covers it and says to use text/csv. This RFC updates RFC 4180. Excel: Recently I discovered an explicit mimetype for Excel, application/vnd.ms-excel. It was registered with IANA in '96. Note the concerns raised about being at the mercy of the sender and having your machine violated. Media Type: … Read more
Try calling read_csv with encoding='latin1', encoding='iso-8859-1' or encoding='cp1252' (these are some of the various encodings found on Windows).
Yes, you need to wrap it in quotes: "some value over two lines",some other value From this document, which is the generally accepted CSV standard: Fields containing line breaks (CRLF), double quotes, and commas should be enclosed in double-quotes
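Python's csv.writer, for instance, applies exactly this rule automatically; the field values below are just the ones from the example above:

```python
import csv
import io

buf = io.StringIO()
# Fields containing CR/LF, double quotes, or commas get wrapped in quotes.
csv.writer(buf).writerow(["some value\nover two lines", "some other value"])
print(buf.getvalue())
```

The first field is emitted quoted, the second left bare, matching the RFC 4180 guidance.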
d3.csv is an asynchronous method. This means that code inside the callback function is run when the data is loaded, but code after and outside the callback function will be run immediately after the request is made, when the data is not yet available. In other words:

first();
d3.csv("path/to/file.csv", function(rows) {
    third();
});
second();

If … Read more