What did I learn from Practical Binary Analysis as a non-reverse-engineering engineer?

I spent the past two weeks reading Practical Binary Analysis. Since I am not a professional reverse engineer, I skimmed “Part III: Advanced Binary Analysis”, so I only read about half the book. Even so, I still gained a lot:

(1) A better understanding of the ELF format. On *nix operating systems, ELF files are everywhere: executables, object files, shared libraries, and core dumps. “Chapter 2: The ELF format” gave me a clear explanation of how an ELF file is composed. E.g., I now know why some functions carry an “@plt” suffix when I debug with gdb.
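To make this concrete, here is a minimal sketch of my own (not code from the book) that reads the ELF header of a file and prints a few fields; it assumes a 64-bit ELF on a Linux-style system where <elf.h> is available:

```c
/* elfhdr.c - print a few ELF header fields.
 * A minimal sketch assuming a 64-bit ELF file and <elf.h>;
 * real tools (readelf -h) handle far more cases. */
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    Elf64_Ehdr eh;
    if (fread(&eh, sizeof(eh), 1, f) != 1 ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        fclose(f);
        return 1;
    }
    /* e_type: ET_EXEC (executable), ET_DYN (shared object/PIE),
     * ET_REL (object file), ET_CORE (core dump) */
    printf("type: %u, machine: %u, entry: 0x%lx\n",
           eh.e_type, eh.e_machine, (unsigned long)eh.e_entry);
    printf("section headers: %u at offset 0x%lx\n",
           eh.e_shnum, (unsigned long)eh.e_shoff);
    fclose(f);
    return 0;
}
```

Running it on /bin/ls and comparing the output with `readelf -h /bin/ls` is a quick way to check the fields.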

(2) Many tricks with GNU Binutils. GNU Binutils is a toolbox that provides versatile command-line programs for analyzing ELF files. Under the hood, it relies heavily on the BFD library. I also got some sense of how to use the BFD library to tweak ELF files.
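For a taste of BFD itself, below is a rough sketch of mine that opens a binary and lists its sections. The BFD API shifts between Binutils releases, so treat every call here as an assumption to verify against your bfd.h:

```c
/* bfdsec.c - list the sections of a binary via BFD.
 * A sketch against the classic BFD API; details (especially the
 * section-size accessor) vary across Binutils releases.
 * Build (roughly): gcc bfdsec.c -lbfd */
#include <bfd.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }
    bfd_init();
    bfd *abfd = bfd_openr(argv[1], NULL); /* NULL: guess the target */
    if (abfd == NULL) {
        fprintf(stderr, "bfd_openr failed\n");
        return 1;
    }
    if (!bfd_check_format(abfd, bfd_object)) {
        fprintf(stderr, "not an object file\n");
        bfd_close(abfd);
        return 1;
    }
    /* Walk the section list; each asection carries name/size/flags. */
    for (asection *sec = abfd->sections; sec != NULL; sec = sec->next)
        printf("%-20s size=0x%lx\n", sec->name,
               (unsigned long)sec->size);
    bfd_close(abfd);
    return 0;
}
```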

(3) “Appendix A: A crash course on X86 assembly” is a good tutorial for brushing up on the x86 architecture and assembly language.

(4) Other gains. E.g., I now understand how to use the LD_PRELOAD environment variable and the dynamic linking functions to manipulate shared libraries.
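As a quick illustration (my own sketch, not code from the book), an LD_PRELOAD interposer can override a libc function and forward to the real one via dlsym(RTLD_NEXT, ...):

```c
/* preload.c - interpose puts() via LD_PRELOAD.
 * A minimal sketch for glibc-style systems; RTLD_NEXT requires
 * _GNU_SOURCE, and older glibc needs -ldl at link time.
 * Build: gcc -shared -fPIC preload.c -o preload.so -ldl
 * Run:   LD_PRELOAD=./preload.so ./some_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int puts(const char *s)
{
    /* Look up the "real" puts provided by the next object in the
     * lookup order (normally libc). */
    int (*real_puts)(const char *) =
        (int (*)(const char *))dlsym(RTLD_NEXT, "puts");
    if (real_puts == NULL)
        return EOF;
    real_puts("[intercepted]");
    return real_puts(s);
}
```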

All in all, if you work on *nix (although this book is based on Linux, I think most of the knowledge also applies to other *nix systems), you should give this book a try. I promise it is not a waste of time, and you will always learn something, believe me!

What you need may be only “pipelines + Unix commands”

I came across Taco Bell Programming recently, and I think the article is worth reading for every software engineer. The post describes a scenario you might consider solving with Hadoop when xargs is actually a simpler and better choice. This reminded me of a similar experience: last year, a client asked me to process a data file with 5 million records. After some investigation, no novel technology was needed; a concise awk script (fewer than 10 lines) worked like a charm! What surprised me even more is that awk is just a single-threaded program, with no nifty concurrency involved.

The IT field never lacks “new” technologies: cloud computing, big data, high concurrency, etc. However, the thinking behind these “fancy” words may date back to the era when Unix arose. The Unix command-line tools are an invaluable treasure. In many cases, picking the right components and gluing them together with pipes can satisfy your requirements perfectly. So by spending some time reviewing the Unix command-line manuals instead of exhaustingly chasing state-of-the-art techniques, you may gain more.
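To appreciate what the shell does when it glues two commands with `|`, here is a minimal sketch of mine that wires up the equivalent of `ls | wc -l` by hand with pipe(2), fork(2), and exec:

```c
/* pipeline.c - what "ls | wc -l" means under the hood.
 * A minimal sketch of the plumbing the shell does for you. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                 /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }
    pid_t left = fork();
    if (left == 0) {           /* left side: ls, stdout -> pipe */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);
    }
    pid_t right = fork();
    if (right == 0) {          /* right side: wc -l, stdin <- pipe */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }
    close(fd[0]);              /* parent keeps no pipe ends open */
    close(fd[1]);
    while (wait(NULL) > 0)
        ;
    return 0;
}
```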

BTW, if your data set can be handled by an awk script, it should not be called “big data”.

The “sophisticated” modern software engineering interview process

About 10 years ago, the interview process for a software engineer was pretty simple and straightforward: if you met the requirements, you would be invited on-site. The interviewer would ask you some questions about your previous or current projects, basic computer science knowledge, and so on. If you were a fresh graduate, there might be a written test on algorithms. Then, if the interviewer thought you were a suitable candidate, you would enter the next round, usually the final one. That round always involved the R&D director or HR, with little or no technical discussion; it was only about salary and whether you would fit the culture of the team. Generally speaking, the whole process lasted only about a week and comprised 1 ~ 2 rounds. No nonsense!

If you want to know what a contemporary interview process looks like, check the following screenshot of a job description, which I think is representative:

Nowadays, before you even reach the on-site stage, you will already have passed 2 ~ 3 rounds of interviews. Usually, a phone screening is the first round, in which HR gets a rough picture of your background. Sometimes HR will also ask you technical questions, even though he/she is not technically oriented. Then HR will notify you of a coding/homework test that you must finish by a deadline. Sometimes the phone screening is omitted, and an email telling you to finish the coding/homework test lands in your mailbox directly. It gives me (and maybe you) the impression that the people at the company are very busy and have no time for small talk with interviewees. You apply for the job, not the other way around, correct? So you must finish some work to prove you are qualified to talk to the company. Alas! I once got a homework test whose description was 7 pages long! Yes, 7 pages! Besides coding, I also needed to write a detailed test plan. The whole task cost me 30 hours, and I even suspected the company had just outsourced its work to free labour. Anyway, if you pass this round, you at least get to hear a voice or see a person; if not, you receive a rejection letter or no response at all. The game is over, and you haven’t exchanged a single word with anyone at the company.

In some cases, there is an extra coding interview before the on-site. You have already passed the online coding test, but this time you share a screen with the interviewer to test your “pair-programming” ability. During this process, you are expected to interact with the interviewer actively: propose something, confirm something, etc. I don’t know whether this is the essence of “pair programming”.

If you reach this stage, congratulations! You will be invited on-site. There may be another 2 ~ 4 rounds of interviews; every interview takes 1 ~ 2 hours, and the content falls into no more than the following 4 categories:
a) Still more coding tests (e.g., a dynamic programming problem);
b) System design (e.g., how would you design a parking lot?);
c) Basic computer science knowledge (e.g., what is the difference between TCP and UDP?);
d) Your previous/current projects (e.g., what is your current working field?).
There seems to be no standard for these interviews, and you don’t know how many questions you must answer correctly to guarantee entering the next round, especially since many problems are open-ended.

Based on the above description, you can see that to get a job offer today, you must endure 4 ~ 6 rounds of interviews, nearly ~10 hours in total (not counting the time you spend on homework or preparing for the online coding test). The whole process can last 1 ~ 2 months. I can’t say whether this method is right or wrong, but it certainly inflates the candidate’s time cost. As for the companies: are so many rounds really meaningful? What result do you expect from each round? Is the candidate who passes all the rounds truly the right person? If you own a company and can give specific answers to the above questions, then you will know whether this interview method suits your corporation.

BTW, I really like the following interview process:

The byproducts of reading OpenBSD netcat code

When I took part in a training course last year, I heard about netcat for the first time. During the class, the tutor showed some hacks and tricks with netcat that appealed to me and motivated me to learn its guts. Fortunately, I was not so busy over the past 2 months, so I could spend my spare time diving into OpenBSD‘s netcat source code, and I picked up abundant byproducts during the process.

(1) Brushing up on socket programming. I wrote my first network application more than 10 years ago, and I have always thought the socket APIs are marvelous. Just ~10 functions (socket, bind, listen, accept…) plus some I/O multiplexing buddies (select, poll, epoll…) connect the whole world, wonderful! Since then, I have kept a habit: whenever I touch a new programming language, network programming is an essential exercise. Even though I don’t write socket-related code now, reading netcat’s socket code indeed refreshed my knowledge and taught me new stuff.
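For the flavor of those ~10 functions, here is a minimal sketch of my own (not netcat’s code) that walks the classic socket/bind/listen/accept sequence and greets one client:

```c
/* greet.c - the classic socket/bind/listen/accept sequence.
 * A minimal sketch; netcat's real code handles IPv6, UDP,
 * timeouts, and much more. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv == -1) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(9090);       /* arbitrary demo port */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("bind");
        return 1;
    }
    if (listen(srv, 1) == -1) { perror("listen"); return 1; }

    int cli = accept(srv, NULL, NULL); /* block until a client connects */
    if (cli == -1) { perror("accept"); return 1; }

    const char *msg = "hello from the other side\n";
    write(cli, msg, strlen(msg));
    close(cli);
    close(srv);
    return 0;
}
```

Fittingly, you can poke it with netcat itself: `nc 127.0.0.1 9090`.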

(2) Writing a tutorial about netcat. I am a mediocre programmer and forget things I haven’t used for a long time, so I take notes on whatever I find useful. IMHO, this “tutorial” is not really meant to teach others something; it is just a journal I can refer to when I need it in the future.

(3) Submitting patches to netcat. While reading the code, I also found bugs and some possible enhancements. Though they are trivial contributions to OpenBSD, I am still happy and enjoyed it.

(4) Implementing a C++ encapsulation of libtls. OpenBSD‘s netcat supports TLS/SSL connections, but libtls requires you to take full care of resource management (memory, sockets, etc.); otherwise, a small mistake can lead to a resource leak, which is fatal for long-lived applications (in fact, both bugs I reported to OpenBSD were related to resource leaks). Therefore, I developed a simple C++ library that wraps libtls, hoping it can free developers from this troublesome problem so they can put more energy into the application logic.
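To show the kind of bookkeeping that motivates such a wrapper, here is a rough C sketch of a libtls client, simplified from the man-page flow; the host name and the single-shot read are just illustrative:

```c
/* tlsget.c - a bare libtls client, showing the cleanup burden.
 * A rough sketch; every early-exit path must release whatever was
 * acquired so far, which is exactly what a C++ RAII wrapper hides.
 * Build (roughly): cc tlsget.c -ltls */
#include <stdio.h>
#include <string.h>
#include <tls.h>

int main(void)
{
    int rc = 1;
    struct tls_config *cfg = NULL;
    struct tls *ctx = NULL;

    if (tls_init() == -1)
        return 1;
    if ((cfg = tls_config_new()) == NULL)
        goto out;
    if ((ctx = tls_client()) == NULL)
        goto out;
    if (tls_configure(ctx, cfg) == -1)
        goto out;
    if (tls_connect(ctx, "www.openbsd.org", "443") == -1)
        goto out;

    const char *req = "GET / HTTP/1.0\r\nHost: www.openbsd.org\r\n\r\n";
    if (tls_write(ctx, req, strlen(req)) == -1)
        goto out;

    char buf[256];
    ssize_t n = tls_read(ctx, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }
    rc = 0;
out:
    /* Forget any of these on one path and you leak. */
    if (ctx != NULL) {
        tls_close(ctx);
        tls_free(ctx);
    }
    if (cfg != NULL)
        tls_config_free(cfg);
    return rc;
}
```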

Long story short, reading classic source code is a rewarding process, and you should consider trying it yourself.

The dilemma brought by “outdated technology”

This week, I came across an interesting post: Do You Know Cobol? If So, There Might Be a Job for You. The general idea is that many big finance companies still use systems developed in Cobol, an ancient programming language. Currently, few people master Cobol, and even worse, most of them will soon retire or already have. IMHO, this article gives a good example of the dilemma that “outdated technology” brings to both companies and individual engineers.

Let me tell two stories first:
(1) I once worked at HPE for nearly two years, and HPE has its own Unix operating system: HP-UX. My team actually did HP-UX related work before I joined, but by then all of the team’s work had already switched to Linux. Why? Because Linux dominated the server market at that point. To earn money, more and more resources had to be invested in the Linux area, while for HP-UX, basic maintenance and development were enough. As far as I know, HP-UX is still the backbone of many critical services, such as banks, telecom operators, etc. But even within HPE itself, HP-UX‘s priority is now very low.

(2) There is a service that was launched in the mid-1990s. It is a 32-bit program, written in C, and stable enough to serve people all over the world every day. About 20 years later, one engineer noticed the Year 2038 problem, because the program uses time_t, which is 32 bits long in a 32-bit build. He began discussing with his leader how to convert the program to 64-bit, but both of them knew it was not as simple as adding the -m64 compile option. After ~20 years, the code had become “mature”: the repository was very large, about 40 engineers had committed code to it, and some modules had turned into total “black boxes”, meaning no one knew the logic behind them, yet they worked like a charm! A 64-bit conversion might compile, but no one knew whether it would actually work! It would need careful code review and sufficient testing, which hardly seemed worth it for a problem that would only occur ~20 years later. Therefore, the task stays on the “Todo” list year after year. Everyone prays the service will be shut down before 2038.
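For the curious, the rollover is easy to demonstrate. In this standalone sketch of mine, int32_t simulates the 32-bit time_t that a -m32 build would give you, since a modern 64-bit build hides the bug:

```c
/* y2038.c - simulate a 32-bit time_t rolling over.
 * A standalone sketch: int32_t stands in for the 32-bit time_t
 * of an old -m32 build. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Largest second count a signed 32-bit time_t can hold:
     * 2^31 - 1 seconds after 1970-01-01 UTC. */
    int64_t last = INT32_MAX;
    time_t t = (time_t)last;
    printf("last representable: %s", ctime(&t)); /* Jan 19, 2038 in UTC;
                                                    local time may differ */

    /* One second later, a 32-bit counter wraps to a negative value,
     * i.e. a date back in 1901. */
    int32_t wrapped = (int32_t)(last + 1);
    printf("wrapped value: %" PRId32 "\n", wrapped);
    return 0;
}
```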

These examples all point to one fact: most “outdated technologies”, such as Cobol, HP-UX, or 32-bit programs, are not that “bad”, but for various reasons they are not mainstream now. For most companies, overhauling services built on these “outdated technologies” is unacceptable: besides the notable cost in time and people, one glitch can bring catastrophic results, and can even shut the company down. On the other side, the pool of engineers who master these “outdated technologies” keeps shrinking, so companies can mostly only cultivate the potential of their internal staff.

For engineers, there is also a dilemma. You can pick up an “outdated technology” as a hobby, but there is a huge risk in adopting it as a full-time job. Maybe one day you will need to find a job again, and many biased companies will reject you just because they think you don’t know some “hyped technology” and can only tame dinosaurs. Ridiculous, isn’t it? But it is the reality we must adapt to.