Software effort estimates are inherently uncertain, as they are probabilistic assessments of the future. Evaluating this uncertainty involves assigning an appropriate confidence level to each estimate and is essential for meeting commitments in software projects. However, estimators tend to be overconfident in their estimates, which undermines the accuracy of their uncertainty assessments. Our research goal is to identify the factors related to overconfidence and uncertainty assessment in software estimation. To that end, we carried out a Systematic Literature Mapping (SLM) based on automated and snowballing searches. Our findings include eight factors related to overconfidence and uncertainty assessment, some of which have unexpected implications for practice. We also identified valuable, easy-to-use metrics that software practitioners can readily apply in their daily work. Additionally, very few field and respondent studies exist on the topic. The software engineering field would benefit significantly from investigating how much practitioners know about the overconfidence effect, and from a better understanding of the perceived importance, practices, and accuracy of uncertainty assessments in the software industry.