Risk and return for B3

One of the subjects I teach in my undergraduate finance class is the relationship between risk and expected returns. In short, the riskier the investment, the higher the return an investor should expect. It is not a difficult argument to make. All you need to remember is that people in financial markets are not naive. Whenever they take a big gamble, the rewards should also be large. Rational investors, in theory, would not invest in risky stocks that are likely to yield low returns. Going further, one of the arguments I make to support this idea is to look at historical data.

New package: GetBCBData

The Central Bank of Brazil (BCB) offers access to the SGS system (Sistema Gerenciador de Séries Temporais, the time series management system) through an official API available here. Over time, I find myself using more and more of the available datasets in my regular research and studies. Last weekend I decided to write my own API package to make my life (and others') a lot easier. Package GetBCBData can fetch data efficiently and rapidly: it uses a caching system, based on package memoise, to speed up repeated requests for the same data; it can use all cores of the machine (parallel computing) when fetching a large batch of time series; it allows a choice of output format, long (row-oriented, tidy data) or wide (column-oriented); and it handles errors internally.
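As a sketch of how a call might look, assuming the argument names from the package documentation (series id 432 is assumed here to be the SELIC target rate in SGS; double-check the id for your own use):

```r
# minimal sketch of fetching a series from the BCB SGS system with GetBCBData;
# the series id and argument names are assumptions based on the package docs
library(GetBCBData)

df <- gbcbd_get_series(
  id = c("selic target" = 432),  # named vector: label = SGS series id
  first.date = "2015-01-01",
  last.date  = Sys.Date(),
  format.data = "long",          # "long" (tidy) or "wide" output
  use.memoise = TRUE,            # cache results locally with memoise
  do.parallel = FALSE            # set TRUE for large batches of series
)

head(df)
```

With `use.memoise = TRUE`, a second identical call should return almost instantly from the local cache instead of hitting the API again.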

BatchGetSymbols is now parallel!

BatchGetSymbols is my most downloaded package by any count. Computation time, however, has always been an issue. While downloading data for 10 or fewer stocks is fine, doing it for a large number of tickers, say the SP500 composition, gets very tedious. I’m glad to report that time is no longer an issue. Today I implemented a parallel option for BatchGetSymbols. If your computer has a high number of cores, you can seriously speed up the import process. Importing the SP500 composition, over 500 stocks, is a breeze.
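A sketch of a parallel batch download, assuming the interface at the time of writing (a future::plan call sets up the workers and the do.parallel flag switches parallelism on; the helper GetSP500Stocks and its Tickers column are taken from the package docs and worth verifying):

```r
# sketch of a parallel download of the SP500 composition with BatchGetSymbols;
# function and argument names are assumptions based on the package docs
library(BatchGetSymbols)
library(future)

plan(multisession, workers = 4)  # set up 4 parallel workers

df_sp500 <- GetSP500Stocks()     # scrape the current SP500 composition

l_out <- BatchGetSymbols(
  tickers     = df_sp500$Tickers,
  first.date  = Sys.Date() - 365,
  last.date   = Sys.Date(),
  do.parallel = TRUE             # the new parallel option
)
```

Each worker fetches a slice of the tickers, so the speedup scales roughly with the number of cores, up to the limits of your connection.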

Can you turn 1,500 R$ into 1,000,430 R$ by investing in the stock market?

In the last few weeks we’ve seen a great deal of controversy in Brazil regarding financial investments. To keep it short, Empiricus, an ad-based company that massively sells online courses and subscriptions, posted a YouTube ad in which a young girl, Bettina, says the following: "Hi, I'm Bettina, I am 22 years old and, starting with R$ 1,500, I now own R$ 1,042,000 of accumulated wealth." She later explains that she earned the money by investing in the stock market over three years. For my international audience, the proposed investment is equivalent to turning $394 into $263,169 over a three-year period.
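The claim implies an extraordinary rate of return. A quick back-of-the-envelope calculation in R, using the figures quoted in the ad:

```r
# implied annual return of turning R$ 1,500 into R$ 1,042,000 in three years
initial <- 1500
final   <- 1042000
n_years <- 3

# compound annual growth rate: (final/initial)^(1/n) - 1
cagr <- (final / initial)^(1 / n_years) - 1
cat(sprintf("Implied annual return: %.0f%%\n", 100 * cagr))
# roughly 786% per year, compounded over the three years
```

For comparison, a sustained return of this magnitude would be far beyond what any diversified stock portfolio has historically delivered.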

(2/2) Book promotion (paperback edition) - "Processing and Analyzing Financial Data with R"

I received many messages regarding my book promotion (see previous post). I’ll use this post to answer the most frequent questions: Does the paperback edition have a discount? No. The price drop is only valid for the ebook edition, but not by choice. Unfortunately, Amazon does not let me run countdown promotions for the paperback edition. So, in favor of those who, like myself, enjoy the smell of a fresh book page, I manually dropped the price of the paperback to 17.99 USD (it was 24.99 USD). Printing costs are heavy, which is why I can’t go all the way to a 50% discount.

Book promotion - "Processing and Analyzing Financial Data with R"

I recently ran a book promotion for my R book in Portuguese and it was a big success! My English book is now being sold with the same promotion. You can purchase it with a 50% discount if you buy it on the 10th of March. See it here. The discount will be valid throughout the week, with daily price increases. If you want to learn more about R and its use in Finance and Economics, this book is a great opportunity. You can find more details about the book, including datasets, R code and all that good stuff, here.

GetDFPData Ver 1.4

I just released a major update to package GetDFPData. Here are the main changes: the naming conventions of the caching system were improved so that they reflect different versions of FRE and DFP files. This means the old caching system no longer works. If you have built your own cache folder with many companies, clean up the cache by deleting all of its folders. Run your code again and it will rebuild all files. Unfortunately this is a “brute force” but necessary step. The code and data are now explicit about the version of downloaded files.
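As an illustration, the cleanup can be done directly from R. The path below is hypothetical; replace it with whatever cache folder you passed in your own GetDFPData calls:

```r
# delete an old cache folder so GetDFPData can rebuild it from scratch;
# 'DFP Cache Folder' is a hypothetical path -- use your own cache directory
my_cache_folder <- "DFP Cache Folder"

if (dir.exists(my_cache_folder)) {
  unlink(my_cache_folder, recursive = TRUE)  # remove folder and all contents
}

dir.create(my_cache_folder)  # recreate an empty cache for the next run
```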

Looking back at 2018 and plans for 2019

At the end of every year I like to write about the highlights of the current year and set plans for the future. First, let’s talk about my work in 2018. Highlights of 2018: research-wise, my scientometrics paper "Is predatory publishing a real threat? Evidence from a large database study" was featured in many news outlets. Its Altmetric page is doing great, with over 1,100 downloads, and the paper ranks in the top 5% of all research output measured by Altmetric. This is, by far, the most impactful research piece I have ever written.

New blog site: From Jekyll to Hugo

A while ago I wrote about purchasing my own web server at DigitalOcean and hosting my shiny applications. Last week I finally got some time to migrate my blog from Github to my new domain, www.msperlin.com. While doing that, I also decided to change the technology behind the blog, from Jekyll to Hugo. Here are my reasons. Jekyll is great for making simple static sites, especially with this template from Dean Attali. It was easy to set it up and host it on Github. My problems with this configuration are:

Some Useful Tricks in RStudio

I’ve been using RStudio for a long time and I have some tricks to share. These are simple and useful commands and shortcuts that really help the productivity of my students. If you have a trick to suggest, use the comment section and I’ll add it to this post. Package rstudioapi: when using RStudio, package rstudioapi gives you lots of information about your session. The most useful is the script location. You can use it to automatically change the working folder to where the file is locally saved.
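A sketch of that trick, assuming the code runs inside an RStudio session (rstudioapi::getActiveDocumentContext only works there, not in a plain R console):

```r
# set the working directory to the folder of the currently open script;
# this only works when run from within an RStudio session
library(rstudioapi)

my_path <- getActiveDocumentContext()$path  # full path of the active script
setwd(dirname(my_path))                     # move to the script's folder
```

This is handy when a script reads data files stored alongside it, since relative paths will then resolve regardless of where the RStudio project was opened.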