Monday, October 29, 2012
[SQLite] Counting Days
SELECT julianday('now') - julianday('2012-07-10') + 1
"112.61215673619881"
SQLite makes this more of a hassle than it should be: julianday() returns fractional days.
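Since julianday() returns fractional days, a whole-day count needs truncation before the + 1. A quick check of the same query through Python's sqlite3 module, with fixed endpoints in place of 'now' so the result is stable:

```python
import sqlite3

# Same day-count query, but with fixed endpoints and CAST to get whole days.
conn = sqlite3.connect(":memory:")
days = conn.execute(
    "SELECT CAST(julianday('2012-10-29') - julianday('2012-07-10') AS INTEGER) + 1"
).fetchone()[0]
print(days)  # 112
conn.close()
```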
Thursday, October 25, 2012
SQLite - CASE WHEN
Example:
Compute the annualized ROE of 2330 (TSMC):
select activity_date,
       case
           when strftime('%m', activity_date) = '03' then roe * 4/1
           when strftime('%m', activity_date) = '06' then roe * 4/2
           when strftime('%m', activity_date) = '09' then roe * 4/3
           else roe
       end as annual_adjusted_roe
from
(
    -- SQLite keeps the row carrying max(report_date), so each
    -- activity_date resolves to its most recently reported ROE.
    select activity_date, roe, max(report_date) from
    (
        select
            E.activity_date,
            I.number / E.number as roe,
            E.report_date
        from BalanceSheet as E
        inner join
        IncomeStmt as I
        on E.stock_code = I.stock_code
        and E.activity_date = I.activity_date
        and E.item = '股東權益總計'
        and I.item = '合併總損益'
        and E.report_type = 'C'
        and I.report_type = 'C'
        and E.stock_code = '2330'
    )
    where roe is not null
    group by activity_date
    order by activity_date
)
Result:
"2004-12-01","0.23137723860560547"
"2005-06-01","0.18294564717444345"
"2005-12-01","0.20982641425180892"
"2006-06-01","0.29793076021065906"
"2006-12-01","0.2498246389394268"
"2007-03-01","0.14263155367277086"
"2007-06-01","0.18819824997708876"
"2007-09-01","0.19872114434046592"
"2007-12-01","0.22403837915176847"
"2008-03-01","0.22017673663001705"
"2008-06-01","0.25570481677155477"
"2008-09-01","0.2515449671927223"
"2008-12-01","0.20926102952524128"
"2009-03-01","0.012261980810645017"
"2009-06-01","0.11859167096946371"
"2009-09-01","0.1617447509053049"
"2009-12-01","0.1792735864247019"
"2010-03-01","0.2541800955198025"
"2010-06-01","0.29903653176286066"
"2010-09-01","0.29928906353328644"
"2010-12-01","0.28042283521239136"
"2011-03-01","0.23793318292084237"
"2011-06-01","0.2549615622488371"
"2011-09-01","0.22821454301634214"
"2011-12-01","0.21272784096487046"
"2012-03-01","0.2011224622441732"
"2012-06-01","0.23849902699697093"
SQLite: Changing a Date to the First Day of Its Month
Example:
- update CashFlowStmt set report_date = strftime('%Y-%m', report_date) || '-01'
- update CashFlowStmt set activity_date = strftime('%Y-%m', activity_date) || '-01'
That is all it takes, though it is not easy to picture at first.
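A self-contained way to see what the strftime trick does, run against a throwaway table rather than the real CashFlowStmt:

```python
import sqlite3

# strftime('%Y-%m', d) keeps only the year-month part; appending '-01'
# snaps any date to the first day of its month.
conn = sqlite3.connect(":memory:")
conn.execute("create table t (report_date text)")
conn.execute("insert into t values ('2012-10-23')")
conn.execute("update t set report_date = strftime('%Y-%m', report_date) || '-01'")
first_day = conn.execute("select report_date from t").fetchone()[0]
print(first_day)  # 2012-10-01
conn.close()
```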
Tuesday, October 23, 2012
Sixty Kilometers to 小霸尖山
Dates: 2012-10-18 ~ 21.
D0 (2012-10-18)
1930: set out from 台電大樓 MRT station, riding to 觀霧 in 建至's car.
D1 (2012-10-19)
大鹿林道 => 大霸尖山登山口 => 九九山莊
Woke to find a screw on my glasses had come loose and the right lens had popped out. I patiently hunted for the tiny screw, and luckily a kind hiker lent me a screwdriver. A stone still sat on my heart: I was worried about my right leg injury, but there was clearly no backing out now. For this trip I had prepared a knee brace and wore it the whole way; it worked quite well. Since I couldn't load the right leg much, the left had to suffer. The left is not my dominant leg, and using it for long stretches easily brings on a sore left calf, even cramps.
It reminded me of the shape of 玉山 as seen from 郡大山.
The whole way I thought about how to walk most economically: feeling every muscle and bone, which toes push off and which hold direction, how the calf transmits force. I had never paid such close attention before; now that I'm older, I have to observe my own body. There was also some mental pressure: fear that my fitness wouldn't hold up ((it indeed didn't)), that the pack was too heavy, or that I might slip. But on the bright side, I was breathing and walking 大鹿林道 in the sunshine, and that alone was happiness.
大鹿林道 is long, but compared with the 730 forest road I would rather walk 大鹿林道. Many things are learned only by comparison, and many only by experience. However thorough a trip report is written, however beautiful the photos, you still have to take it in slowly with your own body; that is what makes life interesting. The silvergrass here bloomed beautifully, so why won't it bloom on 草嶺古道? I'll leave that aside.
Too long away from the mountains, and fitness drops fast. Walking and resting, I finally reached the water source at 16.7K, where everyone stopped for lunch. I still puzzled over why my body was so weak: first sore shoulders, then a sore waist. But what is lost is lost, just like stocks: lose your shirt on stock A and you should not expect to win it back on stock A. A loss is a loss, a gain is a gain; the two must not be confused. By the same logic, the fitness I once built is gone, so I'll simply treat 大鹿林道 as a new starting point and train it up again.
However long 大鹿林道 is, you do eventually reach the 大霸 trailhead.
From H1800M to H2700M: 900M of gain over 4KM, 225M per kilometer. Not terribly steep, or so it once seemed. Now it was pure misery. Dammit. From the 3K mark my left leg began to cramp, much like my first climb of 北插天山, pain that dropped me flat on the trail. No exaggeration: at a wide spot I went straight down. Sure enough the left leg alone couldn't carry it, and I had to push myself forward with the right foot. The sunset walked me into 九九山莊; I was moved nearly to bursting.
5:00 PM
Dinner was whatever we threw together.
Stargazing at night. The stars trembled in the sky, shiver after shiver, each constellation resonating at its own frequency. Probably a hallucination. I lay down at seven and was asleep in an instant.
D2 (2012-10-20)
九九山莊 => 3050高地 => 伊澤山登山口 => 中霸山屋 => 中霸坪 => 大霸尖山霸基 => 小霸尖山 => 大霸尖山霸基 => 中霸坪 => 中霸山屋 => 伊澤山登山口 => 伊澤山 => 伊澤山登山口 => 3050高地 => 加利山 => 3050高地 => 九九山莊
Up at three in the morning, walking in the pre-dawn dark past four, and unexpectedly back at 九九山莊 after dark as well. Calves mildly sore, the leader's pace erratic, yet we still pushed up to the 3050 highland. By then the sun had long risen, and 大霸's shadow lay across the body of 鹿場大山, utterly adorable. I kept wondering: after looping out to 小霸尖山 and back in the afternoon, would I have strength left for 加利山? Sure enough, the afternoon was quite the collapse.
We passed 耶巴奧山, one of the "eight minor peaks": low summits, gentle slopes, easy climbs, bagged in passing on a traverse. They include 巴巴山, 石門山, 雪山東峰, 耶巴奧山, 甘藷峰, 新仙山, 僕落西擴山, and 烏可冬克山.
超人 was not in good shape.
Hesitation. We walked slowly to 中霸山屋 and briefed everyone. Kind 高大哥 doubled back to look for 超人, saw no sign of him, and returned. Later 超人 slowly made his way to the base of 大霸尖山. The rest of us continued toward 中霸坪. The path is not difficult, but thinking of the long descent from 中霸坪 to the base, I worried about getting back up on the return.
Along the way I was exhausted, parched, and unwell, and my photos came out a mess. No matter; as long as the body still moves, that is enough.
Such a long way. A dispiriting feeling.
Finally, the silhouette! Head down first;
the climb back out is a problem for the return trip.
We lazed around the base for a good while, then pushed on toward 小霸尖山. This stretch finally felt thrilling, absolutely worth coming for.
I love the black saddle terrain.
小霸尖山 is scarier seen from afar;
up close it is not frightening at all.
The stretch looks alarming, but the weather was excellent, so no worry about loose soil slick after drizzle. There is only one roped section, actually quite safe, more relaxed than the 品田 V-notch. The climbing was still obscene though, leaving me gasping. Seen from this angle, it looks just like a thumbs-up.
The 大霸 group of peaks lies in the northern part of the 雪山 massif, a major spur thrust out from the main ridge of the 雪山 range at 布秀蘭山. This handsome ridge pivots on 大霸尖山 and sends out four ridges of varying length to the east, north, and west. To the east is the five-peaked, cliff-hung ridgeline of 東霸尖山; due west, the short ridge of the lone, strangely towering 小霸尖山. To the north it splits in two: the long 馬洋山 spur stretching between the 塔克金溪 and 薩克亞金溪, forming the main highlands of 尖石鄉 in 新竹縣 and home to the Atayal communities of 鎮西堡, 新光, 泰崗, and others; and to the northwest, turning west, the 伊澤山 spur, the main ridge of the 大霸 group, carrying a sweeping five-kilometer line of three-thousand-meter heights.
(http://www.spnp.gov.tw/Article.aspx?a=UnxYM%2frAs3Q%3d&lang=1)
東霸尖山 looks so tempting. I'll save it for later. After a long break we ambled back to 中霸山屋 for lunch. This time it was the climb from the base back to 中霸坪 that was obscene; I couldn't feel my legs connecting to my body. The weather up top was relentlessly perfect, baking me, and my walking water trickled away until I had to beg some from 丸子. So thirsty, so tired. Another party, the Yuan Ze University mountaineering club, was enviably fit and moved fast.
In the afternoon we fell back the way we had come.
伊澤山 sits right beside the trail; I wondered what the view from the top would be.
丸子 and 真玲 down below;
those two pointed peaks loomed ahead;
hard to believe we had just walked back from 小霸.
As time wore on my calves, my poor calves, went from sore to numb, numb to rigid, rigid to jelly. I would certainly sleep well tonight. One more obscene climb remained back to the 3050 highland. Infuriating. I briefly considered skipping 加利山 and napping at the 3050 highland, but having come this far, I let the calves hurt.
加利山 is not far, and there is even a shortcut down, but the lodge keeper warned us that people had gotten lost and gone missing on it. Adventure is thrilling, but safety matters too; if the person keeping watch back home is someone you love, how could you let them worry? We were doomed to finish in the dark anyway, and dark is just dark, no need to force the pace: brainwaves run low at dusk, and hurrying invites accidents.
Every after-dark finish gets this same speech anyway.
Back at 九九山莊 a little past six, utterly crushed, beyond tired, legs no longer taking orders.
D3 (2012-10-21)
九九山莊 => 大霸登山口 => 大鹿林道
On the last day everyone was especially warm. I got to know 建至 and the couple 高大哥 and his wife, great fun; the other old hands I'll pass over. No pressure to race the dark today; all I had to do was drag my crippled calves downhill.
Low brainwaves are contagious; everyone was dragging.
The last day was still scorching hot!
Accompanied by swallowtail butterflies, the three-day trip through the 大霸 peaks had to come to an end. My legs were wrecked, yet my heart was reluctant to leave. The mixed flavors of mountaineering mellow with time; the aged sour, sweet, bitter, and spicy each taste different, yet somehow the same.
PS. The muscles on top of my right foot show no discomfort, and the toes are still numb; toe strength was pushed to its limit. Comfortable. (2012-10-24) There is still tightness on straightening. A runny nose on the mountain turned into a full cold after coming down; off to see a doctor shortly.
Monday, October 22, 2012
Mountains I Have Walked
2012
2012-11-08 ~ 13 聖稜線: 池有山 (#23)、品田山前峰、大霸尖山霸基、小霸尖山、巴紗拉雲山、素密達山、雪山北峰 (#24)、凱蘭特昆山、北稜角
2012-10-19 ~ 21 大霸尖山霸基 (#19)、小霸尖山 (#20)、伊澤山 (#21)、加利山 (#22)
2012-10-10 草嶺古道
2012-09-22 龜山島401高地
2012-07-27 大屯主峰
2012-07-07 ~ 09 閂山 (#17)、鈴鳴山 (#18)、無明山西峰 (unforgettable, sigh)
2012-06-28 ~ 29 畢祿山 (#15)、羊頭山 (#16) via the sawtooth ridge、石門山、合歡主峰
2012-06-23 北插天山水源地 (after-dark trip)
2012-06-17 棚集山
2012-06-03 大屯主峰、面天山、向天山 (map-reading class with instructor 龍頭)
2012-05-26 ~ 27 桃山、品田山 (#14) (distracted by office chores)
2012-05-19 劍潭山、文間山、剪刀石山、金面山 (company system go-live)
2012-05-07 松羅湖 (cobra)
2012-04-20 檜山 (saw a dead green snake; first new mid-elevation mountain in a while)
2012-04-14 ~ 15 合歡北峰、合歡西峰 (#13), down to 華崗
2012-04-08 北插天山
2012-04-04 七星山南峰、主峰
2012-03-24 劍潭山
2012-03-17 ~ 18 郡大山 (#11)、望鄉山、西巒大山 (#12)
2012-03-11 大屯主峰
2012-02-26 ~ 2012-03-01 Mount Kinabalu, Borneo, Malaysia (magical)
2012-02-19 七星山主峰
2012-02-11 北插天山水源地 (after-dark trip)
2012-02-05 北插天山水源地 (after-dark trip)
2012-02-01 石碧潭山、中坑山、牛欄窩山、下橫坑山、雞寮坑山、南何山、南何山南峰、沙坑山、二確山、大肚山 (飛沙 traverse)
2012-01-28 七星山主峰
2012-01-15 八仙山、佳保台山
2012-01-07 ~ 08 華崗 up to 合歡主峰 (#10) (thrilling)
2011
2011-12-31 ~ 2012-01-01 北大武山 (#9)
2011-12-17 ~ 18 奇萊北峰 (#7)、奇萊主峰 (#8)
2011-12-11 深按頭山 (大咖's backyard)
2011-12-02 ~ 04 桃山 (#5)、詩崙山、喀拉業山 (#6)
2011-11-29 南插北峰 (上宇內山)、南插天山、魯培山
2011-11-19 金面山、剪刀石山
2011-11-12 桃源谷 down 草嶺古道
2011-11-06 天母古道
2011-11-05 露門山 NW ridge traverse to 波露山 E-turn-NE ridge
2011-10-29 ~ 30 立鷹山、三角山、櫻櫻山、合歡北峰 (#4)、武法奈尾山
2011-10-23 多崖山、北插天山 (滿月圓瀑布、小樂佩、traditional-route loop) (after-dark trip)
2011-10-16 七星山主峰
2011-10-15 鳶嘴山 traverse to 稍來山
2011-10-08 ~ 09 東模故山 NE ridge (scary)
2011-09-24 ~ 25 波露山 E-turn-NE ridge
2011-09-17 ~ 18 露門山 NW ridge、大保克山 N ridge
2011-09-10 滿月圓山、北插天山 NW ridge、多崖山
2011-08-27 七星山主東峰
2011-08-19 ~ 22 雪山主峰 (#3)、雪山東峰 (#2), visiting 翠池
2011-08-14 竹篙山
2011-08-13 卡保山、逐鹿山
2011-08-07 拉卡山
2011-08-06 組合山、樂佩山
2011-08-03 象山六巨石
2011-07-27 南勢角山
2011-07-24 七星山主東峰
2011-07-23 外鳥嘴山、蓮包山
2011-07-16 逐鹿山
2011-07-09 七星山主東峰
2011-06-26 東眼山
2011-06-25 新山夢湖 (did not reach 新山)
2011-06-24 獅仔頭山前峰
2011-06-18 逐鹿山、卡保山
2011-06-14 仙跡岩
2011-06-05 石筍尖
2011-06-04 夫婦山
2011-05-29 面天山、向天山、向天池
2011-05-28 天母古道
2011-05-21 塔曼山
2011-05-14 小獅山
2011-04-23 猴山岳
2011-04-03 北插天山 (my first mid-elevation mountain, hiked solo)
2011-03-05 大屯主南西峰
2010
2010-12-18 七星山主峰
2010-07-18 草嶺古道
2010-03-07 石門山 (#1) (somehow started climbing the Baiyue)
Wednesday, October 17, 2012
Refactoring - TWSE Statistics
First, extract the shared parts. Copy/paste is not a good reuse technique: when the code has a bug, you fix one copy and forget the other, and that gets messy fast. Rather than doing the same work twice or more, pull it out into a common module.
./src/common/sourcing_twse.py
# coding: big5
import csv
import logging
import os
import shutil
import sqlite3
from datetime import date
from datetime import datetime

class SourcingTwse():
    def __init__(self):
        self.LOGGER = logging.getLogger()
        self.URL_TEMPLATE = ''
        self.DATES = []
        self.ZIP_DIR = ''
        self.XLS_DIR = ''
        self.CSV_DIR = ''
        self.DB_FILE = './db/stocktotal.db'
        self.SQL_INSERT = ''

    def init_dates(self, begin_date, end_date):
        begin = datetime.strptime(begin_date, '%Y-%m-%d')
        end = datetime.strptime(end_date, '%Y-%m-%d')
        monthly_begin = 12 * begin.year + begin.month - 1
        monthly_end = 12 * end.year + end.month
        for monthly in range(monthly_begin, monthly_end):
            year, month = divmod(monthly, 12)
            self.DATES.append(date(year, month + 1, 1))

    def source_url_to_zip(self, dest_dir):
        if not os.path.exists(dest_dir):
            os.makedirs(dest_dir)
        for date in self.DATES:
            url = self.URL_TEMPLATE % date.strftime('%Y%m')
            dest_file = self.get_filename(dest_dir, date, 'zip')
            self.__wget(url, dest_file)

    def source_zip_to_xls(self, src_dir, dest_dir):
        assert os.path.isdir(src_dir)
        if not os.path.exists(dest_dir):
            os.makedirs(dest_dir)
        for date in self.DATES:
            src_file = self.get_filename(src_dir, date, 'zip')
            dest_file = self.get_filename(dest_dir, date, 'xls')
            self.source_zip_to_xls_single(src_file, dest_dir, dest_file)

    def source_zip_to_xls_single(self, src_file, dest_dir, dest_file):
        assert os.path.isfile(src_file)
        assert os.path.isdir(dest_dir)
        sevenzip_output_dir = os.path.join(dest_dir, 'sevenzip_output_dir')
        self.__sevenzip_extract(src_file, sevenzip_output_dir)
        if not os.path.exists(sevenzip_output_dir):
            self.LOGGER.info('''%s => failed to extract''' % src_file)
            return
        file_list = os.listdir(sevenzip_output_dir)
        assert len(file_list) == 1
        sevenzip_output_file = os.path.join(sevenzip_output_dir, file_list[0])
        shutil.copy(sevenzip_output_file, dest_file)
        shutil.rmtree(sevenzip_output_dir)

    def source_csv_to_sqlite(self, src_dir, dest_db, sql_insert):
        assert os.path.isdir(src_dir)
        assert os.path.isfile(dest_db)
        for date in self.DATES:
            src_file = self.get_filename(src_dir, date, 'csv')
            if os.path.isfile(src_file):
                self.source_csv_to_sqlite_single(src_file, dest_db, sql_insert)

    def source_csv_to_sqlite_single(self, src_file, dest_db, sql_insert):
        self.LOGGER.debug('''%s => %s''' % (src_file, dest_db))
        fd = open(src_file, 'r')
        csv_reader = csv.reader(fd)
        conn = sqlite3.connect(dest_db)
        cursor = conn.cursor()
        for row in csv_reader:
            cursor.execute(sql_insert, row)
            self.LOGGER.debug(row)
        conn.commit()
        cursor.close()
        conn.close()
        fd.close()

    def get_filename(self, src_dir, date, ext):
        return os.path.join(src_dir, date.strftime('%Y-%m') + '.' + ext)

    def __wget(self, url, dest_file):
        wget = os.path.abspath('./src/thirdparty/wget/wget.exe')
        assert os.path.isfile(wget)
        wget_cmdline = '''%s -N \"%s\" --waitretry=3 -O \"%s\"''' % (wget, url, dest_file)
        os.system(wget_cmdline)

    def __sevenzip_extract(self, src_file, dest_dir):
        sevenzip = os.path.abspath('./src/thirdparty/sevenzip/7z.exe')
        assert os.path.isfile(sevenzip)
        sevenzip_cmdline = '''%s e %s -y -o%s''' % (sevenzip, src_file, dest_dir)
        os.system(sevenzip_cmdline)
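The month iteration in init_dates deserves a note: it flattens (year, month) into a single month index so a plain range() can walk calendar months, crossing year boundaries for free. A standalone sketch of the same arithmetic:

```python
from datetime import date

# Standalone sketch of the init_dates month arithmetic: flatten (year, month)
# into one integer so range() iterates months, inclusive of both endpoints.
def month_range(begin, end):
    monthly_begin = 12 * begin.year + begin.month - 1
    monthly_end = 12 * end.year + end.month
    for monthly in range(monthly_begin, monthly_end):
        year, month = divmod(monthly, 12)
        yield date(year, month + 1, 1)

months = list(month_range(date(2012, 11, 1), date(2013, 2, 1)))
print(months[0], months[-1])  # 2012-11-01 2013-02-01
```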
With that in place, I can concentrate on the intermediate format between Excel and SQLite ((CSV)).
./src/market_statistics/sourcing.py
# coding: big5
import csv
import logging
import os
import xlrd
from datetime import date
from datetime import datetime
from ..common import sourcing_twse

class Sourcing(sourcing_twse.SourcingTwse):
    def __init__(self):
        self.LOGGER = logging.getLogger()
        self.URL_TEMPLATE = '''http://www.twse.com.tw/ch/inc/download.php?l1=Securities+Trading+Monthly+Statistics&l2=Statistics+of+Securities+Market&url=/ch/statistics/download/02/001/%s_C02001.zip'''
        self.DATES = []
        self.ZIP_DIR = '''./dataset/market_statistics/zip/'''
        self.XLS_DIR = '''./dataset/market_statistics/xls/'''
        self.CSV_DIR = '''./dataset/market_statistics/csv/'''
        self.DB_FILE = './db/stocktotal.db'
        self.SQL_INSERT = '''insert or ignore into MarketStatistics(
            report_date,
            activity_date,
            report_type,
            total_trading_value,
            listed_co_number,
            capital_issued,
            total_listed_shares,
            market_capitalization,
            trading_volume,
            trading_value,
            trans_number,
            average_taiex,
            volume_turnover_rate,
            per,
            dividend_yield,
            pbr,
            trading_days
        ) values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'''

    def source(self, begin_date, end_date):
        sourcing_twse.SourcingTwse.init_dates(self, begin_date, end_date)
        sourcing_twse.SourcingTwse.source_url_to_zip(self, self.ZIP_DIR)
        sourcing_twse.SourcingTwse.source_zip_to_xls(self, self.ZIP_DIR, self.XLS_DIR)
        self.source_xls_to_csv(self.XLS_DIR, self.CSV_DIR)
        sourcing_twse.SourcingTwse.source_csv_to_sqlite(self, self.CSV_DIR, self.DB_FILE, self.SQL_INSERT)

    def source_xls_to_csv(self, src_dir, dest_dir):
        assert os.path.isdir(src_dir)
        if not os.path.exists(dest_dir):
            os.makedirs(dest_dir)
        for date in reversed(self.DATES):
            src_file = sourcing_twse.SourcingTwse.get_filename(self, src_dir, date, 'xls')
            self.source_xls_to_csv_single(src_file, dest_dir, date)

    def source_xls_to_csv_single(self, src_file, dest_dir, date):
        assert os.path.isfile(src_file)
        assert os.path.isdir(dest_dir)
        self.__source_v2_xls_to_csv_single(src_file, dest_dir, date)
        self.__source_v1_xls_to_csv_single(src_file, dest_dir, date)

    def __source_v2_xls_to_csv_single(self, src_file, dest_dir, date):
        if date < datetime(2003, 6, 1).date():
            return
        book = xlrd.open_workbook(src_file)
        sheet = book.sheet_by_index(0)
        assert sheet.ncols == 15
        assert sheet.cell(12, 14).value == 'Days'
        assert sheet.cell(12, 0).value.strip() == 'Month'
        dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
        fd = open(dest_file, 'w', newline='')
        csv_writer = csv.writer(fd)
        for r in self.__build_sheet_records(sheet, 13):
            r = [date.strftime('%Y-%m-%d')] + r
            r = self.__remove_comment_mark(r)
            assert len(r) == 17
            csv_writer.writerow(r)
            self.LOGGER.debug('''%s => %s''' % (r, dest_file))
        fd.close()

    def __source_v1_xls_to_csv_single(self, src_file, dest_dir, date):
        if date >= datetime(2003, 6, 1).date() or date <= datetime(2000, 9, 1).date():
            return
        book = xlrd.open_workbook(src_file)
        main_sheet = book.sheet_by_index(0)
        assert main_sheet.ncols == 12
        if date > datetime(2001, 6, 1).date():
            assert main_sheet.cell(12, 0).value.strip() == 'Month'
        elif date > datetime(2000, 9, 1).date():
            assert main_sheet.cell(11, 0).value.strip() == 'Month'
            assert main_sheet.cell(12, 0).value.strip() == ''
        main_records = self.__build_sheet_records(main_sheet, 13)
        rest_sheet = book.sheet_by_index(1)
        assert rest_sheet.ncols == 13
        assert rest_sheet.cell(10, 0).value.strip() == 'Month'
        rest_records = self.__build_sheet_records(rest_sheet, 11)
        assert len(main_records) == len(rest_records)
        dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
        fd = open(dest_file, 'w', newline='')
        csv_writer = csv.writer(fd)
        for i in range(len(main_records)):
            assert len(main_records[i]) == 13
            assert len(rest_records[i]) == 14
            assert main_records[i][0] == rest_records[i][0]
            assert main_records[i][1] == rest_records[i][1]
            r = [date.strftime('%Y-%m-%d')] + \
                main_records[i][:-2] + rest_records[i][2:6] + rest_records[i][-2:-1]
            r = self.__remove_comment_mark(r)
            assert len(r) == 17
            csv_writer.writerow(r)
            self.LOGGER.debug('''%s => %s''' % (r, dest_file))
        fd.close()

    def __build_sheet_records(self, sheet, begin_row):
        rv = []
        monthly_curr_year = ''
        for curr_row in range(begin_row, sheet.nrows):
            r = sheet.row_values(curr_row)
            first_cell = r[0].strip()
            if first_cell.startswith('註'):  # Check footer.
                break
            if first_cell.endswith(')月'):  # Ignore this year's summary because it is partial.
                continue
            if first_cell.endswith(')'):  # Yearly record. Example: 93(2004)
                curr_date = '''%s-01-01''' % first_cell[first_cell.index('(')+1 : -1]
                sheet_record = [curr_date, 'yearly'] + r[1:]
                rv.append(sheet_record)
            if first_cell.endswith('月'):  # Monthly record. Example: 95年 1月
                curr_month = 0
                if '年' in first_cell:
                    monthly_curr_year = int(first_cell[:first_cell.index('年')]) + 1911
                    curr_month = int(first_cell[first_cell.index('年')+1 : first_cell.index('月')])
                else:
                    curr_month = int(first_cell[:first_cell.index('月')])
                curr_date = '''%s-%02d-01''' % (monthly_curr_year, curr_month)
                sheet_record = [curr_date, 'monthly'] + r[1:]
                rv.append(sheet_record)
        return rv

    def __remove_comment_mark(self, csv_record):
        rv = csv_record[:3]
        for i in range(3, len(csv_record)):
            value = csv_record[i]
            try:
                float(value)
                rv.append(value)
            except ValueError:
                fixed_value = value[value.rindex(' ')+ 1 :].replace(',', '')
                float(fixed_value)
                rv.append(fixed_value)
        return rv
./src/listed_co_statistics/sourcing.py
# coding: big5
import csv
import logging
import os
import xlrd
from datetime import date
from datetime import datetime
from ..common import sourcing_twse
from ..common import str_util as str_util

class Sourcing(sourcing_twse.SourcingTwse):
    def __init__(self):
        self.LOGGER = logging.getLogger()
        self.URL_TEMPLATE = '''http://www.twse.com.tw/ch/inc/download.php?l1=Listed+Companies+Monthly+Statistics&l2=P%%2FE+Ratio+%%26+Yield+of+Listed+Stocks&url=/ch/statistics/download/04/001/%s_C04001.zip'''
        self.DATES = []
        self.ZIP_DIR = '''./dataset/listed_co_statistics/zip/'''
        self.XLS_DIR = '''./dataset/listed_co_statistics/xls/'''
        self.CSV_DIR = '''./dataset/listed_co_statistics/csv/'''
        self.DB_FILE = './db/stocktotal.db'
        self.SQL_INSERT = '''insert or ignore into ListedCoStatistics(
            report_date,
            stock_code,
            latest_price,
            per,
            yield,
            pbr
        ) values(?, ?, ?, ?, ?, ?)'''

    def source(self, begin_date, end_date):
        sourcing_twse.SourcingTwse.init_dates(self, begin_date, end_date)
        sourcing_twse.SourcingTwse.source_url_to_zip(self, self.ZIP_DIR)
        sourcing_twse.SourcingTwse.source_zip_to_xls(self, self.ZIP_DIR, self.XLS_DIR)
        self.source_xls_to_csv(self.XLS_DIR, self.CSV_DIR)
        sourcing_twse.SourcingTwse.source_csv_to_sqlite(self, self.CSV_DIR, self.DB_FILE, self.SQL_INSERT)

    def source_xls_to_csv(self, src_dir, dest_dir):
        assert os.path.isdir(src_dir)
        if not os.path.exists(dest_dir):
            os.makedirs(dest_dir)
        for date in reversed(self.DATES):
            src_file = sourcing_twse.SourcingTwse.get_filename(self, src_dir, date, 'xls')
            self.source_xls_to_csv_single(src_file, dest_dir, date)

    # CSV fields should contain: Report Date, Stock Code, Latest Price, PER, Yield, PBR
    def source_xls_to_csv_single(self, src_file, dest_dir, date):
        assert os.path.isfile(src_file)
        assert os.path.isdir(dest_dir)
        self.__source_v3_xls_to_csv_single(src_file, dest_dir, date)
        self.__source_v2_xls_to_csv_single(src_file, dest_dir, date)
        self.__source_v1_xls_to_csv_single(src_file, dest_dir, date)

    def __source_v3_xls_to_csv_single(self, src_file, dest_dir, date):
        if date < datetime(2007, 4, 1).date():
            return
        book = xlrd.open_workbook(src_file)
        sheet = book.sheet_by_index(0)
        assert sheet.ncols == 10
        assert sheet.cell(4, 0).value.strip() == 'Code & Name'
        assert sheet.cell(4, 8).value.strip() == 'PBR'
        dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
        fd = open(dest_file, 'w', newline='')
        csv_writer = csv.writer(fd)
        for r in self.__build_sheet_records(sheet, 0, 5):
            r = [date.strftime('%Y-%m-%d')] + r
            assert len(r) == 6
            csv_writer.writerow(r)
            self.LOGGER.debug('''%s => %s''' % (r, dest_file))
        fd.close()

    def __source_v2_xls_to_csv_single(self, src_file, dest_dir, date):
        if date >= datetime(2007, 4, 1).date() or date < datetime(2000, 9, 1).date():
            return
        book = xlrd.open_workbook(src_file)
        sheet = book.sheet_by_index(0)
        assert sheet.ncols == 21
        assert sheet.cell(4, 0).value.strip() in ('Code & Name', 'CODE & NAME')
        assert sheet.cell(4, 11).value.strip() in ('Code & Name', 'CODE & NAME')
        assert sheet.cell(4, 8).value.strip() == 'PBR'
        assert sheet.cell(4, 19).value.strip() == 'PBR'
        dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
        fd = open(dest_file, 'w', newline='')
        csv_writer = csv.writer(fd)
        for r in self.__build_sheet_records(sheet, 0, 5):
            r = [date.strftime('%Y-%m-%d')] + r
            assert len(r) == 6
            csv_writer.writerow(r)
            self.LOGGER.debug('''%s => %s''' % (r, dest_file))
        for r in self.__build_sheet_records(sheet, 11, 5):
            r = [date.strftime('%Y-%m-%d')] + r
            assert len(r) == 6
            csv_writer.writerow(r)
            self.LOGGER.debug('''%s => %s''' % (r, dest_file))
        fd.close()

    def __source_v1_xls_to_csv_single(self, src_file, dest_dir, date):
        if date >= datetime(2000, 9, 1).date():
            return
        book = xlrd.open_workbook(src_file)
        sheet = book.sheet_by_index(0)
        if date == datetime(2000, 5, 1).date():
            header_last_row = 5
        elif date <= datetime(1999, 7, 1).date():
            header_last_row = 8
        else:
            header_last_row = 4
        assert sheet.ncols in (17, 11)
        assert sheet.cell(header_last_row, 0).value.strip() in ('Code & Name', 'CODE & NAME')
        assert sheet.cell(header_last_row, 6).value.strip() in ('Code & Name', 'CODE & NAME')
        assert sheet.cell(header_last_row, 4).value.strip() == 'PBR'
        assert sheet.cell(header_last_row, 10).value.strip() == 'PBR'
        dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
        fd = open(dest_file, 'w', newline='')
        csv_writer = csv.writer(fd)
        begin_row = header_last_row + 1
        for r in self.__build_bad_sheet_records(sheet, 0, begin_row):
            r = [date.strftime('%Y-%m-%d')] + r
            assert len(r) == 6
            csv_writer.writerow(r)
            self.LOGGER.debug('''%s => %s''' % (r, dest_file))
        for r in self.__build_bad_sheet_records(sheet, 6, begin_row):
            r = [date.strftime('%Y-%m-%d')] + r
            assert len(r) == 6
            csv_writer.writerow(r)
            self.LOGGER.debug('''%s => %s''' % (r, dest_file))
        fd.close()

    def __build_sheet_records(self, sheet, begin_col, begin_row):
        for curr_row in range(begin_row, sheet.nrows):
            r = sheet.row_values(curr_row)
            first_cell = r[begin_col]
            if r[begin_col] == '':
                continue
            if r[begin_col + 3] == '' and r[begin_col + 5] == '' \
                    and r[begin_col + 7] == '' and r[begin_col + 9] == '':
                continue
            if isinstance(first_cell, float):
                first_cell = int(first_cell)
            elif isinstance(first_cell, str):
                first_cell = first_cell.replace(' ', '')
            yield [first_cell, r[begin_col + 3], r[begin_col + 5], r[begin_col + 7], r[begin_col + 9]]

    def __build_bad_sheet_records(self, sheet, begin_col, begin_row):
        for curr_row in range(begin_row, sheet.nrows):
            r = sheet.row_values(curr_row)
            stock_code = self.__fix_stock_code(r[begin_col])
            latest_price = self.__fix_real_number(r[begin_col + 1])
            per = self.__fix_real_number(r[begin_col + 2])
            dividend_yield = self.__fix_real_number(r[begin_col + 3])
            pbr = self.__fix_real_number(r[begin_col + 4])
            if stock_code == '':
                continue
            if latest_price == '' and per == '' and dividend_yield == '' and pbr == '':
                continue
            yield [stock_code, latest_price, per, dividend_yield, pbr]

    def __fix_stock_code(self, bad_stock_code):
        space_removed = bad_stock_code.replace(' ', '')
        stock_code = space_removed[0:4]
        if stock_code.isdigit():  # Quickly grab a plausible stock_code.
            return stock_code
        return space_removed

    def __fix_real_number(self, bad_str):
        if str_util.is_float(bad_str):
            return float(bad_str)
        assert str_util.is_str(bad_str)
        splitted = bad_str.split()
        for test_str in splitted:
            if str_util.is_float(test_str):
                return float(test_str)
        return ''
./src/common/str_util.py
def is_float(test_str):
    try:
        float(test_str)
        return True
    except ValueError:
        return False

def is_str(test_str):
    try:
        str(test_str)
        return True
    except ValueError:
        return False
./src/common/date_util.py
import datetime

def get_last_month():
    today = datetime.date.today()
    first = datetime.date(day=1, month=today.month, year=today.year)
    last_month = first - datetime.timedelta(days=1)
    return datetime.date(day=1, month=last_month.month, year=last_month.year)

def get_this_month():
    today = datetime.date.today()
    return datetime.date(day=1, month=today.month, year=today.year)

def get_yesterday():
    return datetime.date.today() - datetime.timedelta(days=1)
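The get_last_month trick is worth spelling out: the first of this month minus one day always lands somewhere in last month, which then gets snapped to its own first day. Restated standalone as a sanity check on the properties it guarantees:

```python
import datetime

# Restatement of the date_util helpers for a standalone check: both month
# helpers return the first day of a month, and get_last_month() always
# strictly precedes get_this_month(), including across a January boundary.
def get_last_month():
    today = datetime.date.today()
    first = datetime.date(day=1, month=today.month, year=today.year)
    last_month = first - datetime.timedelta(days=1)
    return datetime.date(day=1, month=last_month.month, year=last_month.year)

def get_this_month():
    today = datetime.date.today()
    return datetime.date(day=1, month=today.month, year=today.year)

print(get_last_month(), get_this_month())
```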
SQLite3 Schema:
create table if not exists MarketStatistics
(
    creation_dt datetime default current_timestamp,
    report_date datetime not null,
    activity_date datetime not null,
    report_type text not null,
    total_trading_value real,
    listed_co_number real,
    capital_issued real,
    total_listed_shares real,
    market_capitalization real,
    trading_volume real,
    trading_value real,
    trans_number real,
    average_taiex real,
    volume_turnover_rate real,
    per real,
    dividend_yield real,
    pbr real,
    trading_days int,
    unique (report_date, activity_date, report_type) on conflict ignore
);

create table if not exists ListedCoStatistics
(
    creation_dt datetime default current_timestamp,
    report_date datetime not null,
    stock_code text not null,
    latest_price real,
    per real,
    yield real,
    pbr real,
    unique (report_date, stock_code) on conflict ignore
);
Finally, the sourcing entry-point scripts. (It turns out even 台塑 once had stretches where it was cheap day after day.)
./source_market_statistics.py
import logging
import sys
import src.market_statistics.sourcing as sourcing
import src.common.logger as logger
import src.common.date_util as date_util

FIRST_DAY = '1999-01-01'

def source_all():
    logger.config_root(level=logging.DEBUG)
    last_month = str(date_util.get_last_month())
    s = sourcing.Sourcing()
    s.source(FIRST_DAY, last_month)

def source_last_month():
    logger.config_root(level=logging.DEBUG)
    last_month = str(date_util.get_last_month())
    s = sourcing.Sourcing()
    s.source(last_month, last_month)

def source_csv_to_sqlite_all():
    logger.config_root(level=logging.DEBUG)
    last_month = str(date_util.get_last_month())
    s = sourcing.Sourcing()
    s.init_dates(FIRST_DAY, last_month)
    s.source_csv_to_sqlite(s.CSV_DIR, s.DB_FILE, s.SQL_INSERT)

def test():
    logger.config_root(level=logging.DEBUG)
    s = sourcing.Sourcing()
    #s.source('1999-01-01', '2012-09-01')
    #s.source('2012-09-01', '2012-09-01')
    #s.source('2003-05-01', '2003-05-01')

def main():
    source_last_month()

if __name__ == '__main__':
    sys.exit(main())
./source_listed_co_statistics.py
import logging
import sys
import src.listed_co_statistics.sourcing as sourcing
import src.common.logger as logger
import src.common.date_util as date_util

FIRST_DAY = '1999-03-01'

def source_all():
    logger.config_root(level=logging.DEBUG)
    last_month = str(date_util.get_last_month())
    s = sourcing.Sourcing()
    s.source(FIRST_DAY, last_month)

def source_last_month():
    logger.config_root(level=logging.DEBUG)
    last_month = str(date_util.get_last_month())
    s = sourcing.Sourcing()
    s.source(last_month, last_month)

def source_csv_to_sqlite_all():
    logger.config_root(level=logging.DEBUG)
    last_month = str(date_util.get_last_month())
    s = sourcing.Sourcing()
    s.init_dates(FIRST_DAY, last_month)
    s.source_csv_to_sqlite(s.CSV_DIR, s.DB_FILE, s.SQL_INSERT)

def test():
    logger.config_root(level=logging.DEBUG)
    s = sourcing.Sourcing()
    #s.source('2000-09-01', '2000-09-01') # the last report with 21 cols
    #s.source('2000-08-01', '2000-08-01') # the first report with dirty cols
    #s.source('2012-09-01', '2012-09-01') # this month

def main():
    source_last_month()

if __name__ == '__main__':
    sys.exit(main())
./src/common/sourcing_twse.py
# coding: big5
import csv
import logging
import os
import shutil
import sqlite3
from datetime import date
from datetime import datetime
class SourcingTwse():
def __init__(self):
self.LOGGER = logging.getLogger()
self.URL_TEMPLATE = ''
self.DATES = []
self.ZIP_DIR = ''
self.XLS_DIR = ''
self.CSV_DIR = ''
self.DB_FILE = './db/stocktotal.db'
self.SQL_INSERT = ''
def init_dates(self, begin_date, end_date):
begin = datetime.strptime(begin_date, '%Y-%m-%d')
end = datetime.strptime(end_date, '%Y-%m-%d')
monthly_begin = 12 * begin.year + begin.month - 1
monthly_end = 12 * end.year + end.month
for monthly in range(monthly_begin, monthly_end):
year, month = divmod(monthly, 12)
self.DATES.append(date(year, month + 1, 1))
def source_url_to_zip(self, dest_dir):
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in self.DATES:
url = self.URL_TEMPLATE % date.strftime('%Y%m')
dest_file = self.get_filename(dest_dir, date, 'zip')
self.__wget(url, dest_file)
def source_zip_to_xls(self, src_dir, dest_dir):
assert os.path.isdir(src_dir)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in self.DATES:
src_file = self.get_filename(src_dir, date, 'zip')
dest_file = self.get_filename(dest_dir, date, 'xls')
self.source_zip_to_xls_single(src_file, dest_dir, dest_file)
def source_zip_to_xls_single(self, src_file, dest_dir, dest_file):
assert os.path.isfile(src_file)
assert os.path.isdir(dest_dir)
sevenzip_output_dir = os.path.join(dest_dir, 'sevenzip_output_dir')
self.__sevenzip_extract(src_file, sevenzip_output_dir)
if not os.path.exists(sevenzip_output_dir):
self.LOGGER.info('''%s => Failure to extract''' % src_file)
return
file_list = os.listdir(sevenzip_output_dir)
assert len(file_list) == 1
sevenzip_output_file = os.path.join(sevenzip_output_dir, file_list[0])
shutil.copy(sevenzip_output_file, dest_file)
shutil.rmtree(sevenzip_output_dir)
def source_csv_to_sqlite(self, src_dir, dest_db, sql_insert):
assert os.path.isdir(src_dir)
assert os.path.isfile(dest_db)
for date in self.DATES:
src_file = self.get_filename(src_dir, date, 'csv')
if os.path.isfile(src_file):
self.source_csv_to_sqlite_single(src_file, dest_db, sql_insert)
def source_csv_to_sqlite_single(self, src_file, dest_db, sql_insert):
self.LOGGER.debug('''%s => %s''' % (src_file, dest_db))
fd = open(src_file, 'r')
csv_reader = csv.reader(fd)
conn = sqlite3.connect(dest_db)
cursor = conn.cursor()
for row in csv_reader:
cursor.execute(sql_insert, row)
self.LOGGER.debug(row)
conn.commit()
cursor.close()
conn.close()
fd.close()
def get_filename(self, src_dir, date, ext):
return os.path.join(src_dir, date.strftime('%Y-%m') + '.' + ext)
def __wget(self, url, dest_file):
wget = os.path.abspath('./src/thirdparty/wget/wget.exe')
assert os.path.isfile(wget)
wget_cmdline = '''%s -N \"%s\" --waitretry=3 -O \"%s\"''' % (wget, url, dest_file)
os.system(wget_cmdline)
def __sevenzip_extract(self, src_file, dest_dir):
sevenzip = os.path.abspath('./src/thirdparty/sevenzip/7z.exe')
assert os.path.isfile(sevenzip)
sevenzip_cmdline = '''%s e %s -y -o%s''' % (sevenzip, src_file, dest_dir)
os.system(sevenzip_cmdline)
Next I can focus on the intermediate files ((CSV)) of the Excel => SQLite step.
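SourcingTwse.init_dates above enumerates months by flattening a (year, month) pair into the single integer 12 * year + month, so a date range becomes an ordinary range() loop. The same idea as a standalone generator (a sketch, independent of the class):

```python
from datetime import date, datetime

def month_range(begin_date, end_date):
    """Yield the first day of every month from begin_date to end_date inclusive."""
    begin = datetime.strptime(begin_date, '%Y-%m-%d')
    end = datetime.strptime(end_date, '%Y-%m-%d')
    # Flatten (year, month) into one integer so the loop is a plain range();
    # divmod() recovers year and zero-based month on the way out.
    for monthly in range(12 * begin.year + begin.month - 1,
                         12 * end.year + end.month):
        year, month = divmod(monthly, 12)
        yield date(year, month + 1, 1)
```

Year boundaries need no special casing: month_range('2012-11-01', '2013-02-01') yields 2012-11-01, 2012-12-01, 2013-01-01, and 2013-02-01.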
./src/market_statistics/sourcing.py
# coding: big5
import csv
import logging
import os
import xlrd
from datetime import date
from datetime import datetime
from ..common import sourcing_twse
class Sourcing(sourcing_twse.SourcingTwse):
def __init__(self):
self.LOGGER = logging.getLogger()
self.URL_TEMPLATE = '''http://www.twse.com.tw/ch/inc/download.php?l1=Securities+Trading+Monthly+Statistics&l2=Statistics+of+Securities+Market&url=/ch/statistics/download/02/001/%s_C02001.zip'''
self.DATES = []
self.ZIP_DIR = '''./dataset/market_statistics/zip/'''
self.XLS_DIR = '''./dataset/market_statistics/xls/'''
self.CSV_DIR = '''./dataset/market_statistics/csv/'''
self.DB_FILE = './db/stocktotal.db'
self.SQL_INSERT = '''insert or ignore into MarketStatistics(
report_date,
activity_date,
report_type,
total_trading_value,
listed_co_number,
capital_issued,
total_listed_shares,
market_capitalization,
trading_volume,
trading_value,
trans_number,
average_taiex,
volume_turnover_rate,
per,
dividend_yield,
pbr,
trading_days
) values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'''
def source(self, begin_date, end_date):
sourcing_twse.SourcingTwse.init_dates(self, begin_date, end_date)
sourcing_twse.SourcingTwse.source_url_to_zip(self, self.ZIP_DIR)
sourcing_twse.SourcingTwse.source_zip_to_xls(self, self.ZIP_DIR, self.XLS_DIR)
self.source_xls_to_csv(self.XLS_DIR, self.CSV_DIR)
sourcing_twse.SourcingTwse.source_csv_to_sqlite(self, self.CSV_DIR, self.DB_FILE, self.SQL_INSERT)
def source_xls_to_csv(self, src_dir, dest_dir):
assert os.path.isdir(src_dir)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in reversed(self.DATES):
src_file = sourcing_twse.SourcingTwse.get_filename(self, src_dir, date, 'xls')
self.source_xls_to_csv_single(src_file, dest_dir, date)
def source_xls_to_csv_single(self, src_file, dest_dir, date):
assert os.path.isfile(src_file)
assert os.path.isdir(dest_dir)
self.__source_v2_xls_to_csv_single(src_file, dest_dir, date)
self.__source_v1_xls_to_csv_single(src_file, dest_dir, date)
def __source_v2_xls_to_csv_single(self, src_file, dest_dir, date):
if date < datetime(2003, 6, 1).date():
return
book = xlrd.open_workbook(src_file)
sheet = book.sheet_by_index(0)
assert sheet.ncols == 15
assert sheet.cell(12, 14).value == 'Days'
assert sheet.cell(12, 0).value.strip() == 'Month'
dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for r in self.__build_sheet_records(sheet, 13):
r = [date.strftime('%Y-%m-%d')] + r
r = self.__remove_comment_mark(r)
assert len(r) == 17
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def __source_v1_xls_to_csv_single(self, src_file, dest_dir, date):
if date >= datetime(2003, 6, 1).date() or date <= datetime(2000, 9, 1).date():
return
book = xlrd.open_workbook(src_file)
main_sheet = book.sheet_by_index(0)
assert main_sheet.ncols == 12
if date > datetime(2001, 6, 1).date():
assert main_sheet.cell(12, 0).value.strip() == 'Month'
elif date > datetime(2000, 9, 1).date():
assert main_sheet.cell(11, 0).value.strip() == 'Month'
assert main_sheet.cell(12, 0).value.strip() == ''
main_records = self.__build_sheet_records(main_sheet, 13)
rest_sheet = book.sheet_by_index(1)
assert rest_sheet.ncols == 13
assert rest_sheet.cell(10, 0).value.strip() == 'Month'
rest_records = self.__build_sheet_records(rest_sheet, 11)
assert len(main_records) == len(rest_records)
dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for i in range(len(main_records)):
assert len(main_records[i]) == 13
assert len(rest_records[i]) == 14
assert main_records[i][0] == rest_records[i][0]
assert main_records[i][1] == rest_records[i][1]
r = [date.strftime('%Y-%m-%d')] + \
main_records[i][:-2] + rest_records[i][2:6] + rest_records[i][-2:-1]
r = self.__remove_comment_mark(r)
assert len(r) == 17
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def __build_sheet_records(self, sheet, begin_row):
rv = []
monthly_curr_year = ''
for curr_row in range(begin_row, sheet.nrows):
r = sheet.row_values(curr_row)
first_cell = r[0].strip()
if first_cell.startswith('註'): # Check footer.
break
if first_cell.endswith(')月'): # Ignore this year summary because it is partial.
continue
if first_cell.endswith(')'): # Check if yearly record. Example: 93(2004)
curr_date = '''%s-01-01''' % first_cell[first_cell.index('(')+1 : -1]
sheet_record = [curr_date, 'yearly'] + r[1:]
rv.append(sheet_record)
if first_cell.endswith('月'): # Check if monthly record. Example: 95年 1月
curr_month = 0
if '年' in first_cell:
monthly_curr_year = int(first_cell[:first_cell.index('年')]) + 1911
curr_month = int(first_cell[first_cell.index('年')+1 : first_cell.index('月')])
else:
curr_month = int(first_cell[:first_cell.index('月')])
curr_date = '''%s-%02d-01''' % (monthly_curr_year, curr_month)
sheet_record = [curr_date, 'monthly'] + r[1:]
rv.append(sheet_record)
return rv
def __remove_comment_mark(self, csv_record):
rv = csv_record[:3]
for i in range(3, len(csv_record)):
value = csv_record[i]
try:
float(value)
rv.append(value)
except ValueError:
fixed_value = value[value.rindex(' ')+ 1 :].replace(',', '')
float(fixed_value)
rv.append(fixed_value)
return rv
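The __remove_comment_mark helper above keeps numeric cells as-is and, for cells that carry a footnote mark, keeps only the trailing number and strips thousands separators. The same logic as a standalone sketch (the sample inputs below are made up, not taken from a real TWSE sheet):

```python
def remove_comment_mark(value):
    """Return value unchanged if it parses as a number; otherwise keep only
    the text after the last space and drop comma separators."""
    try:
        float(value)
        return value
    except ValueError:
        # e.g. '(Note) 1,234' -> '1,234' -> '1234'
        fixed = value[value.rindex(' ') + 1:].replace(',', '')
        float(fixed)  # raises if the cell still is not numeric
        return fixed
```

Note that a cell with no space and no parsable number still raises, which is the desired behavior: a truly malformed cell should abort the conversion rather than be silently loaded.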
./src/listed_co_statistics/sourcing.py
# coding: big5
import csv
import logging
import os
import xlrd
from datetime import date
from datetime import datetime
from ..common import sourcing_twse
from ..common import str_util as str_util
class Sourcing(sourcing_twse.SourcingTwse):
def __init__(self):
self.LOGGER = logging.getLogger()
self.URL_TEMPLATE = '''http://www.twse.com.tw/ch/inc/download.php?l1=Listed+Companies+Monthly+Statistics&l2=P%%2FE+Ratio+%%26+Yield+of+Listed+Stocks&url=/ch/statistics/download/04/001/%s_C04001.zip'''
self.DATES = []
self.ZIP_DIR = '''./dataset/listed_co_statistics/zip/'''
self.XLS_DIR = '''./dataset/listed_co_statistics/xls/'''
self.CSV_DIR = '''./dataset/listed_co_statistics/csv/'''
self.DB_FILE = './db/stocktotal.db'
self.SQL_INSERT = '''insert or ignore into ListedCoStatistics(
report_date,
stock_code,
latest_price,
per,
yield,
pbr
) values(?, ?, ?, ?, ?, ?)'''
def source(self, begin_date, end_date):
sourcing_twse.SourcingTwse.init_dates(self, begin_date, end_date)
sourcing_twse.SourcingTwse.source_url_to_zip(self, self.ZIP_DIR)
sourcing_twse.SourcingTwse.source_zip_to_xls(self, self.ZIP_DIR, self.XLS_DIR)
self.source_xls_to_csv(self.XLS_DIR, self.CSV_DIR)
sourcing_twse.SourcingTwse.source_csv_to_sqlite(self, self.CSV_DIR, self.DB_FILE, self.SQL_INSERT)
def source_xls_to_csv(self, src_dir, dest_dir):
assert os.path.isdir(src_dir)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in reversed(self.DATES):
src_file = sourcing_twse.SourcingTwse.get_filename(self, src_dir, date, 'xls')
self.source_xls_to_csv_single(src_file, dest_dir, date)
# CSV fields should contain: report date, stock code, latest price, PER, yield, PBR
def source_xls_to_csv_single(self, src_file, dest_dir, date):
assert os.path.isfile(src_file)
assert os.path.isdir(dest_dir)
self.__source_v3_xls_to_csv_single(src_file, dest_dir, date)
self.__source_v2_xls_to_csv_single(src_file, dest_dir, date)
self.__source_v1_xls_to_csv_single(src_file, dest_dir, date)
def __source_v3_xls_to_csv_single(self, src_file, dest_dir, date):
if date < datetime(2007, 4, 1).date():
return
book = xlrd.open_workbook(src_file)
sheet = book.sheet_by_index(0)
assert sheet.ncols == 10
assert sheet.cell(4, 0).value.strip() == 'Code & Name'
assert sheet.cell(4, 8).value.strip() in ('PBR', 'PBR')
dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for r in self.__build_sheet_records(sheet, 0, 5):
r = [date.strftime('%Y-%m-%d')] + r
assert len(r) == 6
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def __source_v2_xls_to_csv_single(self, src_file, dest_dir, date):
if date >= datetime(2007, 4, 1).date() or date < datetime(2000, 9, 1).date():
return
book = xlrd.open_workbook(src_file)
sheet = book.sheet_by_index(0)
assert sheet.ncols == 21
assert sheet.cell(4, 0).value.strip() in ('Code & Name', 'CODE & NAME')
assert sheet.cell(4, 11).value.strip() in ('Code & Name', 'CODE & NAME')
assert sheet.cell(4, 8).value.strip() in ('PBR', 'PBR')
assert sheet.cell(4, 19).value.strip() in ('PBR', 'PBR')
dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for r in self.__build_sheet_records(sheet, 0, 5):
r = [date.strftime('%Y-%m-%d')] + r
assert len(r) == 6
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
for r in self.__build_sheet_records(sheet, 11, 5):
r = [date.strftime('%Y-%m-%d')] + r
assert len(r) == 6
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def __source_v1_xls_to_csv_single(self, src_file, dest_dir, date):
if date >= datetime(2000, 9, 1).date():
return
book = xlrd.open_workbook(src_file)
sheet = book.sheet_by_index(0)
if date == datetime(2000, 5, 1).date():
header_last_row = 5
elif date <= datetime(1999, 7, 1).date():
header_last_row = 8
else:
header_last_row = 4
assert sheet.ncols in (17, 11)
assert sheet.cell(header_last_row, 0).value.strip() in ('Code & Name', 'CODE & NAME')
assert sheet.cell(header_last_row, 6).value.strip() in ('Code & Name', 'CODE & NAME')
assert sheet.cell(header_last_row, 4).value.strip() in ('PBR', 'PBR')
assert sheet.cell(header_last_row, 10).value.strip() in ('PBR', 'PBR')
dest_file = sourcing_twse.SourcingTwse.get_filename(self, dest_dir, date, 'csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
begin_row = header_last_row + 1
for r in self.__build_bad_sheet_records(sheet, 0, begin_row):
r = [date.strftime('%Y-%m-%d')] + r
assert len(r) == 6
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
for r in self.__build_bad_sheet_records(sheet, 6, begin_row):
r = [date.strftime('%Y-%m-%d')] + r
assert len(r) == 6
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def __build_sheet_records(self, sheet, begin_col, begin_row):
for curr_row in range(begin_row, sheet.nrows):
r = sheet.row_values(curr_row)
first_cell = r[begin_col]
if r[begin_col] == '':
continue
if r[begin_col + 3] == '' and r[begin_col + 5] == '' \
and r[begin_col + 7] == '' and r[begin_col + 9] == '':
continue
if isinstance(first_cell, float):
first_cell = int(first_cell)
elif isinstance(first_cell, str):
first_cell = first_cell.replace(' ','')
yield [first_cell, r[begin_col + 3], r[begin_col + 5], r[begin_col + 7], r[begin_col + 9]]
def __build_bad_sheet_records(self, sheet, begin_col, begin_row):
for curr_row in range(begin_row, sheet.nrows):
r = sheet.row_values(curr_row)
stock_code = self.__fix_stock_code(r[begin_col])
latest_price = self.__fix_real_number(r[begin_col + 1])
per = self.__fix_real_number(r[begin_col + 2])
dividend_yield = self.__fix_real_number(r[begin_col + 3])
pbr = self.__fix_real_number(r[begin_col + 4])
if stock_code == '':
continue
if latest_price == '' and per == '' and dividend_yield == '' and pbr == '':
continue
yield [stock_code, latest_price, per, dividend_yield, pbr]
def __fix_stock_code(self, bad_stock_code):
space_removed = bad_stock_code.replace(' ','')
stock_code = space_removed[0:4]
if stock_code.isdigit(): # Quickly get possible stock_code
return stock_code
return space_removed
def __fix_real_number(self, bad_str):
if str_util.is_float(bad_str):
return float(bad_str)
assert str_util.is_str(bad_str)
splitted = bad_str.split()
for test_str in splitted:
if str_util.is_float(test_str):
return float(test_str)
return ''
./src/common/str_util.py
def is_float(test_str):
try:
float(test_str)
return True
except ValueError:
return False
def is_str(test_str):
try:
str(test_str)
return True
except ValueError:
return False
./src/common/date_util.py
import datetime
def get_last_month():
today = datetime.date.today()
first = datetime.date(day=1, month=today.month, year=today.year)
last_month = first - datetime.timedelta(days=1)
return datetime.date(day=1, month=last_month.month, year=last_month.year)
def get_this_month():
today = datetime.date.today()
return datetime.date(day=1, month=today.month, year=today.year)
def get_yesterday():
return datetime.date.today() - datetime.timedelta(days=1)
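get_last_month above relies on a standard trick: the first day of the current month minus one day always lands in the previous month, with no special case for January. The same logic, parameterized on an explicit "today" so it can be checked against fixed dates:

```python
import datetime

def last_month_of(today):
    """First day of the month before today's month."""
    # First day of today's month, minus one day, is the last day of the
    # previous month; snap that back to day 1.
    first = datetime.date(today.year, today.month, 1)
    prev = first - datetime.timedelta(days=1)
    return datetime.date(prev.year, prev.month, 1)
```

last_month_of(datetime.date(2012, 10, 16)) gives 2012-09-01, and the January case wraps the year correctly: last_month_of(datetime.date(2013, 1, 5)) gives 2012-12-01.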
SQLite3 Schema:
create table if not exists MarketStatistics
(
creation_dt datetime default current_timestamp,
report_date datetime not null,
activity_date datetime not null,
report_type text not null,
total_trading_value real,
listed_co_number real,
capital_issued real,
total_listed_shares real,
market_capitalization real,
trading_volume real,
trading_value real,
trans_number real,
average_taiex real,
volume_turnover_rate real,
per real,
dividend_yield real,
pbr real,
trading_days int,
unique (report_date, activity_date, report_type) on conflict ignore
);
create table if not exists ListedCoStatistics
(
creation_dt datetime default current_timestamp,
report_date datetime not null,
stock_code text not null,
latest_price real,
per real,
yield real,
pbr real,
unique (report_date, stock_code) on conflict ignore
);
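Both tables pair "insert or ignore" statements with a "unique ... on conflict ignore" constraint, which makes re-sourcing idempotent: loading a month that is already in the database simply does nothing. A minimal in-memory demonstration, using a simplified stand-in table rather than the real schema:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''create table t (
    report_date text not null,
    stock_code text not null,
    pbr real,
    unique (report_date, stock_code) on conflict ignore)''')
# Inserting the same (report_date, stock_code) key twice leaves one row,
# so re-running a sourcing pass over already-loaded months is harmless.
conn.execute("insert or ignore into t values ('2012-09-01', '2330', 3.0)")
conn.execute("insert or ignore into t values ('2012-09-01', '2330', 9.9)")
rows = conn.execute('select pbr from t').fetchall()
```

Note the first write wins: the later duplicate is dropped entirely, so correcting a bad row requires a delete or an "insert or replace", not another ignore-insert.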
Finally, the sourcing entry-point scripts. As it turns out, even Formosa Plastics (台塑) once had stretches where it was cheap day after day.
./source_market_statistics.py
import logging
import sys
import src.market_statistics.sourcing as sourcing
import src.common.logger as logger
import src.common.date_util as date_util
FIRST_DAY = '1999-01-01'
def source_all():
logger.config_root(level=logging.DEBUG)
last_month = str(date_util.get_last_month())
s = sourcing.Sourcing()
s.source(FIRST_DAY, last_month)
def source_last_month():
logger.config_root(level=logging.DEBUG)
last_month = str(date_util.get_last_month())
s = sourcing.Sourcing()
s.source(last_month, last_month)
def source_csv_to_sqlite_all():
logger.config_root(level=logging.DEBUG)
last_month = str(date_util.get_last_month())
s = sourcing.Sourcing()
s.init_dates(FIRST_DAY, last_month)
s.source_csv_to_sqlite(s.CSV_DIR, s.DB_FILE, s.SQL_INSERT)
def test():
logger.config_root(level=logging.DEBUG)
s = sourcing.Sourcing()
#s.source('1999-01-01', '2012-09-01')
#s.source('2012-09-01', '2012-09-01')
#s.source('2003-05-01', '2003-05-01')
def main():
source_last_month()
if __name__ == '__main__':
sys.exit(main())
./source_listed_co_statistics.py
import logging
import sys
import src.listed_co_statistics.sourcing as sourcing
import src.common.logger as logger
import src.common.date_util as date_util
FIRST_DAY = '1999-03-01'
def source_all():
logger.config_root(level=logging.DEBUG)
last_month = str(date_util.get_last_month())
s = sourcing.Sourcing()
s.source(FIRST_DAY, last_month)
def source_last_month():
logger.config_root(level=logging.DEBUG)
last_month = str(date_util.get_last_month())
s = sourcing.Sourcing()
s.source(last_month, last_month)
def source_csv_to_sqlite_all():
logger.config_root(level=logging.DEBUG)
last_month = str(date_util.get_last_month())
s = sourcing.Sourcing()
s.init_dates(FIRST_DAY, last_month)
s.source_csv_to_sqlite(s.CSV_DIR, s.DB_FILE, s.SQL_INSERT)
def test():
logger.config_root(level=logging.DEBUG)
s = sourcing.Sourcing()
#s.source('2000-09-01', '2000-09-01') # for the last report for 21 cols
#s.source('2000-08-01', '2000-08-01') # for the first report for dirty cols
#s.source('2012-09-01', '2012-09-01') # for this month
def main():
source_last_month()
if __name__ == '__main__':
sys.exit(main())
Tuesday, October 16, 2012
Source: Statistics of Securities Market
Just finished crawling all the data from 2000-10 through 2012-09; there are not that many records.
With the database loaded, it is of course time to play with something interesting:
select
activity_date,
average_taiex,
pbr
from
(
select *, max(report_date) from MarketStatistics where report_type = 'monthly'
group by activity_date
)
order by activity_date
PBR is strongly positively correlated with the index ((stating the obvious)).
Dividend yield is somewhat negatively correlated with the index ((also stating the obvious)).
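The subquery above leans on a SQLite-specific behavior: when a query uses a bare max() (or min()) together with non-aggregated columns and a group by, SQLite fills the non-aggregated columns from the row that held the maximum. So grouping by activity_date with max(report_date) picks the latest revision of each month's statistics. A small sketch of that behavior with hypothetical rows:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table MS (activity_date text, report_date text, pbr real)')
conn.executemany('insert into MS values (?, ?, ?)', [
    ('2012-08-01', '2012-09-10', 1.5),   # superseded by a later report
    ('2012-08-01', '2012-10-10', 1.6),   # latest report for 2012-08
    ('2012-07-01', '2012-09-10', 1.4),
])
# With a bare max() aggregate, SQLite takes the other columns from the row
# that produced the maximum, i.e. the most recent report per month.
rows = conn.execute('''select activity_date, pbr, max(report_date) from MS
                       group by activity_date
                       order by activity_date''').fetchall()
```

This bare-column behavior is a SQLite extension, not portable SQL; on other engines the same query would be rejected or return an arbitrary row.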
SQLite schema:
create table if not exists MarketStatistics
(
creation_dt datetime default current_timestamp,
report_date datetime not null,
activity_date datetime not null,
report_type text not null,
total_trading_value real,
listed_co_number real,
capital_issued real,
total_listed_shares real,
market_capitalization real,
trading_volume real,
trading_value real,
trans_number real,
average_taiex real,
volume_turnover_rate real,
per real,
dividend_yield real,
pbr real,
trading_days int,
unique (report_date, activity_date, report_type) on conflict ignore
);
Python source code:
# coding: big5
import csv
import logging
import os
import shutil
import sqlite3
import xlrd
from datetime import date
from datetime import datetime
from ..common import logger
class Sourcing():
def __init__(self):
self.LOGGER = logging.getLogger()
self.URL_TEMPLATE = '''http://www.twse.com.tw/ch/inc/download.php?l1=Securities+Trading+Monthly+Statistics&l2=Statistics+of+Securities+Market&url=/ch/statistics/download/02/001/%s_C02001.zip'''
self.DATES = []
self.ZIP_DIR = '''./dataset/market_statistics/zip/'''
self.XLS_DIR = '''./dataset/market_statistics/xls/'''
self.CSV_DIR = '''./dataset/market_statistics/csv/'''
self.DB_FILE = './db/stocktotal.db'
self.SQL_INSERT = '''insert or ignore into MarketStatistics(
report_date,
activity_date,
report_type,
total_trading_value,
listed_co_number,
capital_issued,
total_listed_shares,
market_capitalization,
trading_volume,
trading_value,
trans_number,
average_taiex,
volume_turnover_rate,
per,
dividend_yield,
pbr,
trading_days
) values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'''
def source(self, begin_date, end_date):
self.init_dates(begin_date, end_date)
self.source_url_to_zip(self.ZIP_DIR)
self.source_zip_to_xls(self.ZIP_DIR, self.XLS_DIR)
self.source_xls_to_csv(self.XLS_DIR, self.CSV_DIR)
self.source_csv_to_sqlite(self.CSV_DIR, self.DB_FILE, self.SQL_INSERT)
def init_dates(self, begin_date, end_date):
begin = datetime.strptime(begin_date, '%Y-%m-%d')
end = datetime.strptime(end_date, '%Y-%m-%d')
monthly_begin = 12 * begin.year + begin.month - 1
monthly_end = 12 * end.year + end.month
for monthly in range(monthly_begin, monthly_end):
year, month = divmod(monthly, 12)
self.DATES.append(date(year, month + 1, 1))
def source_url_to_zip(self, dest_dir):
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in self.DATES:
url = self.URL_TEMPLATE % date.strftime('%Y%m')
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.zip')
self.__wget(url, dest_file)
def source_zip_to_xls(self, src_dir, dest_dir):
assert os.path.isdir(src_dir)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in self.DATES:
src_file = os.path.join(src_dir, date.strftime('%Y-%m') + '.zip')
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.xls')
self.source_zip_to_xls_single(src_file, dest_dir, dest_file)
def source_zip_to_xls_single(self, src_file, dest_dir, dest_file):
assert os.path.isfile(src_file)
assert os.path.isdir(dest_dir)
sevenzip_output_dir = os.path.join(dest_dir, 'sevenzip_output_dir')
self.__sevenzip_extract(src_file, sevenzip_output_dir)
if not os.path.exists(sevenzip_output_dir):
self.LOGGER.info('''%s => Failure to extract''' % src_file)
return
file_list = os.listdir(sevenzip_output_dir)
assert len(file_list) == 1
sevenzip_output_file = os.path.join(sevenzip_output_dir, file_list[0])
shutil.copy(sevenzip_output_file, dest_file)
shutil.rmtree(sevenzip_output_dir)
def source_xls_to_csv(self, src_dir, dest_dir):
assert os.path.isdir(src_dir)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in reversed(self.DATES):
src_file = os.path.join(src_dir, date.strftime('%Y-%m') + '.xls')
self.source_xls_to_csv_single(src_file, dest_dir, date)
"""
CSV fields should contain:
Report Date
Activity Date
Report Type (monthly or yearly)
Total Trading Value of TWSE
No. of Listed Co.
Capital Issued
Total Listed Shares
Market Capitalization
Trading Volume
Trading Value
No. of Trans. (1,000)
TAIEX (Average)
Volume Turnover Rate (%)
P/E Ratio (PER)
Dividend Yield (%)
P/B Ratio (PBR)
Trading Days
"""
def source_xls_to_csv_single(self, src_file, dest_dir, date):
assert os.path.isfile(src_file)
assert os.path.isdir(dest_dir)
self.__source_v1_xls_to_csv_single(src_file, dest_dir, date)
self.__source_v2_xls_to_csv_single(src_file, dest_dir, date)
def __source_v1_xls_to_csv_single(self, src_file, dest_dir, date):
if date < datetime(2003, 6, 1).date():
return
book = xlrd.open_workbook(src_file)
sheet = book.sheet_by_index(0)
assert sheet.ncols == 15
assert sheet.cell(12, 14).value == 'Days'
assert sheet.cell(12, 0).value.strip() == 'Month'
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for r in self.__build_sheet_records(sheet, 13):
r = [date.strftime('%Y-%m-%d')] + r
r = self.__remove_comment_mark(r)
assert len(r) == 17
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def __source_v2_xls_to_csv_single(self, src_file, dest_dir, date):
if date >= datetime(2003, 6, 1).date() or date <= datetime(2000, 9, 1).date():
return
book = xlrd.open_workbook(src_file)
main_sheet = book.sheet_by_index(0)
assert main_sheet.ncols == 12
if date > datetime(2001, 6, 1).date():
assert main_sheet.cell(12, 0).value.strip() == 'Month'
elif date > datetime(2000, 9, 1).date():
assert main_sheet.cell(11, 0).value.strip() == 'Month'
assert main_sheet.cell(12, 0).value.strip() == ''
main_records = self.__build_sheet_records(main_sheet, 13)
rest_sheet = book.sheet_by_index(1)
assert rest_sheet.ncols == 13
assert rest_sheet.cell(10, 0).value.strip() == 'Month'
rest_records = self.__build_sheet_records(rest_sheet, 11)
assert len(main_records) == len(rest_records)
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for i in range(len(main_records)):
assert len(main_records[i]) == 13
assert len(rest_records[i]) == 14
assert main_records[i][0] == rest_records[i][0]
assert main_records[i][1] == rest_records[i][1]
r = [date.strftime('%Y-%m-%d')] + \
main_records[i][:-2] + rest_records[i][2:6] + rest_records[i][-2:-1]
r = self.__remove_comment_mark(r)
assert len(r) == 17
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def source_csv_to_sqlite(self, src_dir, dest_db, sql_insert):
assert os.path.isdir(src_dir)
assert os.path.isfile(dest_db)
for file in os.listdir(src_dir):
self.source_csv_to_sqlite_single(os.path.join(src_dir, file), dest_db, sql_insert)
def source_csv_to_sqlite_single(self, src_file, dest_db, sql_insert):
self.LOGGER.debug('''%s => %s''' % (src_file, dest_db))
fd = open(src_file, 'r')
csv_reader = csv.reader(fd)
conn = sqlite3.connect(dest_db)
cursor = conn.cursor()
for row in csv_reader:
cursor.execute(self.SQL_INSERT, row)
self.LOGGER.debug(row)
conn.commit()
cursor.close()
conn.close()
fd.close()
def __wget(self, url, dest_file):
wget = os.path.abspath('./src/thirdparty/wget/wget.exe')
assert os.path.isfile(wget)
wget_cmdline = '''%s -N \"%s\" --waitretry=3 -O \"%s\"''' % (wget, url, dest_file)
os.system(wget_cmdline)
def __sevenzip_extract(self, src_file, dest_dir):
sevenzip = os.path.abspath('./src/thirdparty/sevenzip/7z.exe')
assert os.path.isfile(sevenzip)
sevenzip_cmdline = '''%s e %s -y -o%s''' % (sevenzip, src_file, dest_dir)
os.system(sevenzip_cmdline)
def __build_sheet_records(self, sheet, begin_row):
rv = []
monthly_curr_year = ''
for curr_row in range(begin_row, sheet.nrows):
r = sheet.row_values(curr_row)
first_cell = r[0].strip()
if first_cell.startswith('註'): # Check footer.
break
if first_cell.endswith(')月'): # Ignore this year summary because it is partial.
continue
if first_cell.endswith(')'): # Check if yearly record. Example: 93(2004)
curr_date = '''%s-01-01''' % first_cell[first_cell.index('(')+1 : -1]
sheet_record = [curr_date, 'yearly'] + r[1:]
rv.append(sheet_record)
if first_cell.endswith('月'): # Check if monthly record. Example: 95年 1月
curr_month = 0
if '年' in first_cell:
monthly_curr_year = int(first_cell[:first_cell.index('年')]) + 1911
curr_month = int(first_cell[first_cell.index('年')+1 : first_cell.index('月')])
else:
curr_month = int(first_cell[:first_cell.index('月')])
curr_date = '''%s-%02d-01''' % (monthly_curr_year, curr_month)
sheet_record = [curr_date, 'monthly'] + r[1:]
rv.append(sheet_record)
return rv
def __remove_comment_mark(self, csv_record):
rv = csv_record[:3]
for i in range(3, len(csv_record)):
value = csv_record[i]
try:
float(value)
rv.append(value)
except ValueError:
fixed_value = value[value.rindex(' ')+ 1 :].replace(',', '')
float(fixed_value)
rv.append(fixed_value)
return rv
塞完資料庫,當然就玩點有趣的東西:
select
activity_date,
average_taiex,
pbr
from
(
select *, max(report_date) from MarketStatistics where report_type = 'monthly'
group by activity_date
)
order by activity_date
PBR 跟股價指數有強烈的正相關 ((這是廢話))
殖利率跟股價指數有些的負相關 ((這是廢話))
SQLite schema:
create table if not exists MarketStatistics
(
creation_dt datetime default current_timestamp,
report_date datetime not null,
activity_date datetime not null,
report_type text not null,
total_trading_value real,
listed_co_number real,
capital_issued real,
total_listed_shares real,
market_capitalization real,
trading_volume real,
trading_value real,
trans_number real,
average_taiex real,
volume_turnover_rate real,
per real,
dividend_yield real,
pbr real,
trading_days int,
unique (report_date, activity_date, report_type) on conflict ignore
);
Python source code:
# coding: big5
import csv
import logging
import os
import shutil
import sqlite3
import xlrd
from datetime import date
from datetime import datetime
from ..common import logger
class Sourcing():
def __init__(self):
self.LOGGER = logging.getLogger()
self.URL_TEMPLATE = '''http://www.twse.com.tw/ch/inc/download.php?l1=Securities+Trading+Monthly+Statistics&l2=Statistics+of+Securities+Market&url=/ch/statistics/download/02/001/%s_C02001.zip'''
self.DATES = []
self.ZIP_DIR = '''./dataset/market_statistics/zip/'''
self.XLS_DIR = '''./dataset/market_statistics/xls/'''
self.CSV_DIR = '''./dataset/market_statistics/csv/'''
self.DB_FILE = './db/stocktotal.db'
self.SQL_INSERT = '''insert or ignore into MarketStatistics(
report_date,
activity_date,
report_type,
total_trading_value,
listed_co_number,
capital_issued,
total_listed_shares,
market_capitalization,
trading_volume,
trading_value,
trans_number,
average_taiex,
volume_turnover_rate,
per,
dividend_yield,
pbr,
trading_days
) values(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'''
def source(self, begin_date, end_date):
self.init_dates(begin_date, end_date)
self.source_url_to_zip(self.ZIP_DIR)
self.source_zip_to_xls(self.ZIP_DIR, self.XLS_DIR)
self.source_xls_to_csv(self.XLS_DIR, self.CSV_DIR)
self.source_csv_to_sqlite(self.CSV_DIR, self.DB_FILE, self.SQL_INSERT)
def init_dates(self, begin_date, end_date):
begin = datetime.strptime(begin_date, '%Y-%m-%d')
end = datetime.strptime(end_date, '%Y-%m-%d')
monthly_begin = 12 * begin.year + begin.month - 1
monthly_end = 12 * end.year + end.month
for monthly in range(monthly_begin, monthly_end):
year, month = divmod(monthly, 12)
self.DATES.append(date(year, month + 1, 1))
def source_url_to_zip(self, dest_dir):
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in self.DATES:
url = self.URL_TEMPLATE % date.strftime('%Y%m')
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.zip')
self.__wget(url, dest_file)
def source_zip_to_xls(self, src_dir, dest_dir):
assert os.path.isdir(src_dir)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in self.DATES:
src_file = os.path.join(src_dir, date.strftime('%Y-%m') + '.zip')
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.xls')
self.source_zip_to_xls_single(src_file, dest_dir, dest_file)
def source_zip_to_xls_single(self, src_file, dest_dir, dest_file):
assert os.path.isfile(src_file)
assert os.path.isdir(dest_dir)
sevenzip_output_dir = os.path.join(dest_dir, 'sevenzip_output_dir')
self.__sevenzip_extract(src_file, sevenzip_output_dir)
if not os.path.exists(sevenzip_output_dir):
self.LOGGER.info('''%s => Failure to extract''' % src_file)
return
file_list = os.listdir(sevenzip_output_dir)
assert len(file_list) is 1
sevenzip_output_file = os.path.join(sevenzip_output_dir, file_list[0])
shutil.copy(sevenzip_output_file, dest_file)
shutil.rmtree(sevenzip_output_dir)
def source_xls_to_csv(self, src_dir, dest_dir):
assert os.path.isdir(src_dir)
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in reversed(self.DATES):
src_file = os.path.join(src_dir, date.strftime('%Y-%m') + '.xls')
self.source_xls_to_csv_single(src_file, dest_dir, date)
"""
CSV fields should contains:
Report Date
Activity Date
Report Type (monthly or yearly)
Total Trading Value of TWSE
No. of Listed Co.
Capital Issued
Total Listed Shares
Market Capitalization
Trading Volume
Trading Value
No. of Trans. (1,000)
TAIEX (Average)
Volume Turnover Rate (%)
P/E Ratio (PER)
Dividend Yield (%)
P/B Ratio (PBR)
Trading Days
"""
def source_xls_to_csv_single(self, src_file, dest_dir, date):
assert os.path.isfile(src_file)
assert os.path.isdir(dest_dir)
self.__source_v1_xls_to_csv_single(src_file, dest_dir, date)
self.__source_v2_xls_to_csv_single(src_file, dest_dir, date)
def __source_v1_xls_to_csv_single(self, src_file, dest_dir, date):
if date < datetime(2003, 6, 1).date():
return
book = xlrd.open_workbook(src_file)
sheet = book.sheet_by_index(0)
assert sheet.ncols is 15
assert sheet.cell(12, 14).value == 'Days'
assert sheet.cell(12, 0).value.strip() == 'Month'
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for r in self.__build_sheet_records(sheet, 13):
r = [date.strftime('%Y-%m-%d')] + r
r = self.__remove_comment_mark(r)
assert len(r) == 17
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def __source_v2_xls_to_csv_single(self, src_file, dest_dir, date):
if date >= datetime(2003, 6, 1).date() or date <= datetime(2000, 9, 1).date():
return
book = xlrd.open_workbook(src_file)
main_sheet = book.sheet_by_index(0)
assert main_sheet.ncols == 12
if date > datetime(2001, 6, 1).date():
assert main_sheet.cell(12, 0).value.strip() == 'Month'
elif date > datetime(2000, 9, 1).date():
assert main_sheet.cell(11, 0).value.strip() == 'Month'
assert main_sheet.cell(12, 0).value.strip() == ''
main_records = self.__build_sheet_records(main_sheet, 13)
rest_sheet = book.sheet_by_index(1)
assert rest_sheet.ncols == 13
assert rest_sheet.cell(10, 0).value.strip() == 'Month'
rest_records = self.__build_sheet_records(rest_sheet, 11)
assert len(main_records) == len(rest_records)
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m') + '.csv')
fd = open(dest_file, 'w', newline='')
csv_writer = csv.writer(fd)
for i in range(len(main_records)):
assert len(main_records[i]) == 13
assert len(rest_records[i]) == 14
assert main_records[i][0] == rest_records[i][0]
assert main_records[i][1] == rest_records[i][1]
r = [date.strftime('%Y-%m-%d')] + \
main_records[i][:-2] + rest_records[i][2:6] + rest_records[i][-2:-1]
r = self.__remove_comment_mark(r)
assert len(r) == 17
csv_writer.writerow(r)
self.LOGGER.debug('''%s => %s''' % (r, dest_file))
fd.close()
def source_csv_to_sqlite(self, src_dir, dest_db, sql_insert):
assert os.path.isdir(src_dir)
assert os.path.isfile(dest_db)
for file in os.listdir(src_dir):
self.source_csv_to_sqlite_single(os.path.join(src_dir, file), dest_db, sql_insert)
def source_csv_to_sqlite_single(self, src_file, dest_db, sql_insert):
self.LOGGER.debug('''%s => %s''' % (src_file, dest_db))
fd = open(src_file, 'r')
csv_reader = csv.reader(fd)
conn = sqlite3.connect(dest_db)
cursor = conn.cursor()
for row in csv_reader:
cursor.execute(self.SQL_INSERT, row)
self.LOGGER.debug(row)
conn.commit()
cursor.close()
conn.close()
fd.close()
def __wget(self, url, dest_file):
wget = os.path.abspath('./src/thirdparty/wget/wget.exe')
assert os.path.isfile(wget)
wget_cmdline = '''%s -N \"%s\" --waitretry=3 -O \"%s\"''' % (wget, url, dest_file)
os.system(wget_cmdline)
def __sevenzip_extract(self, src_file, dest_dir):
sevenzip = os.path.abspath('./src/thirdparty/sevenzip/7z.exe')
assert os.path.isfile(sevenzip)
sevenzip_cmdline = '''%s e \"%s\" -y -o\"%s\"''' % (sevenzip, src_file, dest_dir)
os.system(sevenzip_cmdline)
def __build_sheet_records(self, sheet, begin_row):
rv = []
monthly_curr_year = ''
for curr_row in range(begin_row, sheet.nrows):
r = sheet.row_values(curr_row)
first_cell = r[0].strip()
if first_cell.startswith('註'): # Check footer.
break
if first_cell.endswith(')月'): # Ignore this year summary because it is partial.
continue
if first_cell.endswith(')'): # Check if yearly record. Example: 93(2004)
curr_date = '''%s-01-01''' % first_cell[first_cell.index('(')+1 : -1]
sheet_record = [curr_date, 'yearly'] + r[1:]
rv.append(sheet_record)
if first_cell.endswith('月'): # Check if monthly record. Example: 95年 1月
curr_month = 0
if '年' in first_cell:
monthly_curr_year = int(first_cell[:first_cell.index('年')]) + 1911
curr_month = int(first_cell[first_cell.index('年')+1 : first_cell.index('月')])
else:
curr_month = int(first_cell[:first_cell.index('月')])
curr_date = '''%s-%02d-01''' % (monthly_curr_year, curr_month)
sheet_record = [curr_date, 'monthly'] + r[1:]
rv.append(sheet_record)
return rv
def __remove_comment_mark(self, csv_record):
rv = csv_record[:3]
for i in range(3, len(csv_record)):
value = csv_record[i]
try:
float(value)
rv.append(value)
except ValueError:
fixed_value = value[value.rindex(' ')+ 1 :].replace(',', '')
float(fixed_value)
rv.append(fixed_value)
return rv
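The two fiddly parts of `__build_sheet_records` and `__remove_comment_mark` are the ROC-era date labels and the footnote-marked numbers. A standalone sketch of both conversions (the function names and sample cells below are mine, not from the project, but the logic mirrors the methods above):

```python
def parse_cell_date(first_cell, carry_year=None):
    # Yearly rows look like '93(2004)'; monthly rows look like
    # '95年 1月' (ROC year 95 + 1911 = 2006) or just '2月' when
    # the year carries over from an earlier row.
    if first_cell.endswith(')'):
        return first_cell[first_cell.index('(') + 1:-1] + '-01-01', carry_year
    if '年' in first_cell:
        carry_year = int(first_cell[:first_cell.index('年')]) + 1911
        month = int(first_cell[first_cell.index('年') + 1:first_cell.index('月')])
    else:
        month = int(first_cell[:first_cell.index('月')])
    return '%s-%02d-01' % (carry_year, month), carry_year

def remove_comment_mark(value):
    # Keep values that already parse as numbers; otherwise drop the
    # leading footnote mark (everything before the last space) and
    # the thousands separators.
    try:
        float(value)
        return value
    except ValueError:
        fixed = value[value.rindex(' ') + 1:].replace(',', '')
        float(fixed)  # raises if the remainder still is not numeric
        return fixed

print(parse_cell_date('95年 1月'))         # ('2006-01-01', 2006)
print(remove_comment_mark('註 1,234.56'))  # '1234.56'
```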
The embarrassing System.MissingMethodException
System.MissingMethodException: Method not found: 'Boolean System.Threading.WaitHandle.WaitOne(Int32)'.
Then why did it let me compile in the first place?!
So the problem was the .NET Framework version. Really lame.
Upgrading to 3.5 SP1 solved it.
Truly embarrassing.
Monday, October 15, 2012
Journey to the West, the first fifty chapters
The monsters get stronger with every fight.
Mihawk of the Seven Warlords of the Sea said that a king's most fearsome skill is not martial power but the ability to turn enemies into friends; that is what is truly scary. Luffy does not even need to fight: the Empress simply joins his side ((blush)). Sun Wukong is the same. In today's terms, he is the CALL OUT KING. Tang Sanzang is the bargaining chip, Zhu Bajie is the clown for comic relief, and Sha Wujing is the eternal house-sitter. Of course, Sun Wukong is also genuinely loyal to Tang Sanzang. It is just a comic novel, and it made me laugh until I nearly pulled a muscle.
Enough of that. Let me talk about the three institutional investors' trading records instead. TWSE is kind to me here: there is no need to deliberately wait a few seconds between downloads, so I fetched everything in no time.
Schema:
create table if not exists TradingSummary
(
creation_dt datetime default current_timestamp,
trading_date datetime not null,
item text not null,
buy real,
sell real,
diff real,
unique (trading_date, item) on conflict ignore
);
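The `unique ... on conflict ignore` clause is what makes re-running the sourcing step safe: inserting the same (trading_date, item) pair twice is a silent no-op, matching the `insert or ignore` statement used later. A minimal in-memory demo (the sample row is made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''create table TradingSummary
    (trading_date datetime not null,
     item text not null,
     buy real, sell real, diff real,
     unique (trading_date, item) on conflict ignore)''')
row = ('2012-10-15', '自營商', 100.0, 40.0, 60.0)
conn.execute('insert into TradingSummary values (?, ?, ?, ?, ?)', row)
conn.execute('insert into TradingSummary values (?, ?, ?, ?, ?)', row)  # silently ignored
count = conn.execute('select count(*) from TradingSummary').fetchone()[0]
print(count)  # 1
```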
Source code:
import csv
import logging
import os
import sqlite3
from ..common import logger
class Sourcing():
def __init__(self):
self.LOGGER = logging.getLogger()
self.URL_TEMPLATE = '''http://www.twse.com.tw/ch/trading/fund/BFI82U/BFI82U_print.php?begin_date=%s&end_date=&report_type=day&language=ch&save=csv'''
self.DATES = []
self.CSV_DIR = '''./dataset/trading_summary/csv/'''
self.DB_FILE = './db/stocktotal.db'
self.SQL_INSERT = '''insert or ignore into
TradingSummary(trading_date, item, buy, sell, diff) values(?, ?, ?, ?, ?)'''
def source(self, begin_date, end_date):
self.init_dates(begin_date, end_date)
#self.source_url_to_csv(self.CSV_DIR)
self.source_csv_to_sqlite(self.CSV_DIR, self.DB_FILE, self.SQL_INSERT)
def init_dates(self, begin_date, end_date):
from datetime import date
from datetime import datetime
from datetime import timedelta
begin = datetime.strptime(begin_date, '%Y-%m-%d')
end = datetime.strptime(end_date, '%Y-%m-%d')
self.DATES = [begin + timedelta(n) for n in range(int((end - begin).days + 1))]
def source_url_to_csv(self, dest_dir):
if not os.path.exists(dest_dir):
os.makedirs(dest_dir)
for date in self.DATES:
url = self.URL_TEMPLATE % date.strftime('%Y%m%d')
dest_file = os.path.join(dest_dir, date.strftime('%Y-%m-%d'))
self.__wget(url, dest_file)
def source_csv_to_sqlite(self, src_dir, dest_db, sql_insert):
assert os.path.isfile(dest_db)
for date in self.DATES:
src_file = os.path.join(src_dir, date.strftime('%Y-%m-%d'))
self.source_csv_to_sqlite_single(src_file, dest_db, sql_insert)
def source_csv_to_sqlite_single(self, src_file, dest_db, sql_insert):
self.LOGGER.debug('''%s => %s''' % (src_file, dest_db))
csv_reader = csv.reader(open(src_file, 'r'))
rows = [_ for _ in csv_reader]
if len(rows) == 1:
self.LOGGER.info('''%s => No record''' % src_file)
return
elif len(rows) != 6:
self.LOGGER.info('''%s => Error''' % src_file)
return
conn = sqlite3.connect(dest_db)
cursor = conn.cursor()
for n in range(2, 6):
r = self.__build_db_record(src_file, rows[n])
cursor.execute(self.SQL_INSERT, r)
self.LOGGER.debug(r)
conn.commit()
cursor.close()
conn.close()
def __wget(self, url, dest_file):
wget = os.path.abspath('./src/thirdparty/wget/wget.exe')
assert os.path.isfile(wget)
wget_cmdline = '''%s -N \"%s\" --waitretry=3 -O \"%s\"''' % (wget, url, dest_file)
os.system(wget_cmdline)
def __build_db_record(self, src_file, row):
trading_date = os.path.basename(src_file)
item = row[0]
buy = row[1].replace(',','')
sell = row[2].replace(',','')
diff = row[3].replace(',','')
return [trading_date, item, buy, sell, diff]
Sunday, October 14, 2012
Stocktotal Database, Part 2
Next, I want to look up each company's most recent equity ratio = shareholders' equity / total assets.
After puzzling over it for a while, I pieced together the SQL statement I wanted with an inner join:
select
E.stock_code,
E.activity_date,
E.number as Equity,
A.number as Assets,
E.number / A.number as Ratio
from BalanceSheet as E
inner join
BalanceSheet as A
on E.stock_code = A.stock_code
and E.activity_date = A.activity_date
and E.item = '股東權益總計'
and A.item = '資產總計'
and E.report_type = 'C'
and A.report_type = 'C'
and E.activity_date = '2012-06-30'
Then I hit trouble: the original schema stored item numbers with data type text, and it turns out that if a field value is '17,458,850.00', SQLite casts it to 17 ((int)), which is not what we want. If the value were '17458850.00', SQLite would correctly convert it to 17458850. So sourcing_base.py needs a small change:
cursor.execute(self.SQL_INSERT, (..., row[1].replace(',', '')))
Then redo everything: wipe the data and re-insert over ten million rows. Damn, who has that kind of time? Just run this instead:
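The casting pitfall is easy to reproduce from Python: when SQLite coerces text to a number it parses only the leading numeric prefix, so the first thousands separator truncates the value. A quick in-memory check:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# CAST stops at the first character that cannot be part of a number,
# so the comma truncates '17,458,850.00' down to 17.
bad = conn.execute("select cast('17,458,850.00' as real)").fetchone()[0]
good = conn.execute("select cast('17458850.00' as real)").fetchone()[0]
print(bad)   # 17.0
print(good)  # 17458850.0
```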
update BalanceSheet set number = replace(number, ',', '');
Re-running the SQL command above was slow, so I built an index:
create index IX_BALANCE_SHEET_ITEM on BalanceSheet(item);
Much better. By the way, accounting has two notations for negative numbers: negative one can be written either as -1 or as (1).
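As a sketch of normalizing both quirks on the Python side before insert (the helper name is mine, not from the project):

```python
def parse_accounting_number(text):
    # Accountants write negative one either as -1 or as (1);
    # also strip thousands separators before converting.
    text = text.strip().replace(',', '')
    if text.startswith('(') and text.endswith(')'):
        return -float(text[1:-1])
    return float(text)

print(parse_accounting_number('(1,234.50)'))     # -1234.5
print(parse_accounting_number('-1'))             # -1.0
print(parse_accounting_number('17,458,850.00'))  # 17458850.0
```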
So storing numbers as text was a damn design mistake from the start. Time to change the schema.
Should it be int or real? The income statement has something-per-share items, so there are decimals; real it is, across the board. Next up, changing the schema. The SQLite Manager Firefox plugin offers a quick way to change a column's data type ((!!))
ALTER TABLE "main"."BalanceSheet" RENAME TO "oXHFcGcd04oXHFcGcd04_BalanceSheet";
I happily clicked the button, and then tragedy: the indexes were gone. Damn. Time to rebuild:
CREATE TABLE "main"."BalanceSheet" (
"creation_dt" datetime DEFAULT (current_timestamp) ,
"stock_code" text NOT NULL ,
"report_type" character(1) NOT NULL ,
"report_date" datetime NOT NULL ,
"activity_date" datetime NOT NULL ,
"item" text NOT NULL ,
"number" real,
"revision" int DEFAULT (0) );
INSERT INTO "main"."BalanceSheet" SELECT ... FROM "main"."oXHFcGcd04oXHFcGcd04_BalanceSheet";
DROP TABLE "main"."oXHFcGcd04oXHFcGcd04_BalanceSheet";
create index IX_BALANCE_SHEET_ITEM on BalanceSheet(item);
The plain index above is easy to understand; the unique index below is less obvious. But there is nothing to it: the create table statement declared a unique constraint, and SQLite builds a corresponding unique index for it. ((Not hard to see why: if unique did not come with an index, every insert would require a full table scan, and that could never work.))
create unique index UX_BALANCE_SHEET_RECORD on
BalanceSheet(stock_code, report_type, report_date, activity_date, item, revision);
((In the future the index SQL commands can be fished out up front: SELECT sql FROM sqlite_master where type = 'index' and tbl_name = 'BalanceSheet'))
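Both points, the implicit unique index and the recoverable index SQL, can be seen directly in sqlite_master. A small in-memory demo (table trimmed to two columns for brevity):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''create table BalanceSheet
    (stock_code text not null, item text not null,
     unique (stock_code, item))''')
conn.execute('create index IX_BALANCE_SHEET_ITEM on BalanceSheet(item)')
# The unique constraint appears as an automatic index (its sql is NULL);
# the explicit index keeps the create statement that can be saved
# before a rename and replayed afterwards.
rows = conn.execute("""select name, sql from sqlite_master
    where type = 'index' and tbl_name = 'BalanceSheet'""").fetchall()
for name, sql in rows:
    print(name, sql)
```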
Stocktotal is finally back to normal.
Now I want to find: among companies whose free cash flow has been positive in each of the past ten years, which ones most recently had an equity ratio above 50%?
select * from
(
select
E.stock_code,
E.activity_date,
E.number as Equity,
A.number as Assets,
E.number / A.number as Ratio
from BalanceSheet as E
inner join
BalanceSheet as A
on E.stock_code = A.stock_code
and E.activity_date = A.activity_date
and E.item = '股東權益總計'
and A.item = '資產總計'
and E.report_type = 'C'
and A.report_type = 'C'
and E.activity_date = '2012-06-30'
and E.stock_code in
(
select stock_code from
(
select stock_code, activity_date, sum(number) as sum from CashFlowStmt
where report_type = 'C'
and strftime('%m-%d', activity_date) = '12-31'
and strftime('%Y', activity_date) > '2001'
and item in ('Operating', 'Financing')
group by activity_date, stock_code
)
where sum > 0
group by stock_code
having count(*) >= 10
)
) where Ratio > 0.5
ET: 3186ms; a very nice time ((index => trading space for time)). The 38 companies are:
1301, 1303, 1319, 1434, 1460, 1525, 1730, 1734, 1802, 1905,
2106, 2323, 2325, 2330, 2340, 2367, 2387, 2428, 2460, 2474,
2478, 2492, 3031, 3060, 3311, 5706, 6155, 6202, 6271, 6283,
8103, 8210, 9905, 9917, 9925, 9938, 9939, 9943
Then I realized a lot of data is still missing: monthly revenue, statements of changes in shareholders' equity, stock prices, the market-wide P/E ratio, institutional investors' trading, and so on.
Saturday, October 13, 2012
Stocktotal Database, Part 1
Schema:
drop table if exists BalanceSheet;
drop table if exists IncomeStmt;
drop table if exists CashFlowStmt;
drop table if exists StockCode;
create table if not exists BalanceSheet
(
creation_dt datetime default current_timestamp,
stock_code text not null,
report_type character(1) not null,
report_date datetime not null,
activity_date datetime not null,
item text not null,
number text,
revision int default 0,
unique (stock_code, report_type, report_date, activity_date, item, revision) on conflict ignore
);
create table if not exists IncomeStmt
(
creation_dt datetime default current_timestamp,
stock_code text not null,
report_type character(1) not null,
report_date datetime not null,
activity_date datetime not null,
item text not null,
number text,
revision int default 0,
unique (stock_code, report_type, report_date, activity_date, item, revision) on conflict ignore
);
create table if not exists CashFlowStmt
(
creation_dt datetime default current_timestamp,
stock_code text not null,
report_type character(1) not null,
report_date datetime not null,
activity_date datetime not null,
item text not null,
number text,
revision int default 0,
unique (stock_code, report_type, report_date, activity_date, item, revision) on conflict ignore
);
create table if not exists StockCode
(
creation_dt datetime default current_timestamp,
code text unique,
name text unique,
isin_code text unique,
listing_date datetime,
market_category text,
industry_category text,
cfi_code text
);
No indexes have been built for queries yet. SQLite has no stored procedures, so there is no need to create any either. ((May consider PostgreSQL or some other open-source database solution later))
Investigation:
Will SQLite blow up? Don't even worry about it!
select count(*) from BalanceSheet -- 7518726
select count(*) from IncomeStmt -- 3561990
select count(*) from CashFlowStmt -- 276730
Reference: http://www.sqlite.org/limits.html
Maximum Number Of Rows In A Table
The theoretical maximum number of rows in a table is 2^64. This limit is unreachable since the maximum database size of 14 terabytes will be reached first. A 14 terabytes database can hold no more than approximately 1e+13 rows, and then only if there are no indices and if each row contains very little data.
An interesting query: over the past ten years, which stocks had positive free cash flow every single year?
select stock_code, count(*) as positive_fcf_count from
(
select stock_code, activity_date, sum(number) as sum from CashFlowStmt
where report_type = 'C'
and strftime('%m-%d', activity_date) = '12-31'
and strftime('%Y', activity_date) > '2001'
and item in ('Operating', 'Financing')
group by activity_date, stock_code
)
where sum > 0
group by stock_code
having positive_fcf_count >= 10
Pay special attention to how SQLite handles date/time; see http://www.sqlite.org/lang_datefunc.html. Many of the date/time functions commonly used in MSSQL have SQLite counterparts, which I personally find even more intuitive.
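A few of these SQLite date/time functions, exercised from Python for illustration (the sample dates are arbitrary; the julianday expression is the usual day-count idiom):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# strftime extracts date parts as text; julianday arithmetic counts days.
month = conn.execute("select strftime('%m', '2007-03-01')").fetchone()[0]
year_end = conn.execute("select strftime('%m-%d', '2011-12-31')").fetchone()[0]
days = conn.execute(
    "select julianday('2012-07-12') - julianday('2012-07-10') + 1").fetchone()[0]
print(month, year_end, days)  # 03 12-31 3.0
```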
Answer: 92 companies in total. ((Not verified by hand, but 1301 is on the list as expected))
Thursday, October 11, 2012
Git - About Version Control
Local Version Control Systems
Many people’s version-control method of choice is to copy files into another directory (perhaps a time-stamped directory, if they’re clever). This approach is very common because it is so simple, but it is also incredibly error prone.
((My copies constantly get tangled together))
Centralized Version Control Systems
The next major issue that people encounter is that they need to collaborate with developers on other systems. To deal with this problem, Centralized Version Control Systems (CVCSs) were developed.
((Perforce belongs to this hard-to-use family))
Distributed Version Control Systems
This is where Distributed Version Control Systems (DVCSs) step in. In a DVCS (such as Git, Mercurial, Bazaar or Darcs), clients don’t just check out the latest snapshot of the files: they fully mirror the repository.
((This is closer to real-world practice, which is why I want to set up Git to store my source code and datasets))