The last decades have witnessed a significant evolution in data generation, management, and maintenance. This has resulted in vast amounts of data available in a variety of forms and formats such as RDF. Because RDF data is represented as a complex graph structure, applying machine learning algorithms to extract valuable knowledge and insights from it is not straightforward, especially when the data is enormous. Although Knowledge Graph Embedding models (KGEs) convert RDF graphs into low-dimensional vector spaces, the resulting vectors lack explainability. In contrast, in this paper we introduce a generic, distributed, and scalable software framework that transforms big RDF data into an explainable feature matrix, which can then be exploited by many standard machine learning algorithms. By exploiting semantic web and big data technologies, our approach extracts a variety of features by deeply traversing a given large RDF graph. The proposed framework is open-source, well-documented, and fully integrated into the Semantic Analytics Stack (SANSA). Experiments on real-world use cases show that the extracted features can be successfully used in machine learning tasks such as classification.
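To illustrate the general idea of an explainable feature matrix (this is a minimal sketch, not the SANSA implementation, which is distributed and built on Apache Spark; the `ex:` entities and predicates are purely hypothetical), each (predicate, object) pair reachable from an entity can become a named, human-readable binary feature column:

```python
# Toy RDF data as (subject, predicate, object) triples; names are hypothetical.
triples = [
    ("ex:Alice", "ex:worksAt", "ex:AcmeCorp"),
    ("ex:Alice", "ex:knows",   "ex:Bob"),
    ("ex:Bob",   "ex:worksAt", "ex:AcmeCorp"),
]

# One feature column per distinct (predicate, object) pair.
# Each column has a readable name, which is what makes the matrix explainable.
columns = sorted({(p, o) for _, p, o in triples})

def feature_row(entity):
    """Binary indicator vector: 1 if the entity has the (predicate, object) edge."""
    edges = {(p, o) for s, p, o in triples if s == entity}
    return [1 if c in edges else 0 for c in columns]

# Entity -> feature vector; usable as input to standard ML algorithms.
matrix = {s: feature_row(s) for s in sorted({s for s, _, _ in triples})}
```

Unlike an embedding vector, every column here can be traced back to a concrete graph edge (e.g. "worksAt AcmeCorp"), which is the sense in which such features remain explainable.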