First, read in the "20 newsgroups" dataset from scikit-learn's built-in datasets

In [10]:
from sklearn.datasets import fetch_20newsgroups

dataset = fetch_20newsgroups(shuffle=True, random_state=1, remove=('headers', 'footers', 'quotes'))
documents = dataset.data
print(dataset.target_names)
print(len(documents))
#print(documents[0])
['alt.atheism', 'comp.graphics', 'comp.os.ms-windows.misc', 'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware', 'comp.windows.x', 'misc.forsale', 'rec.autos', 'rec.motorcycles', 'rec.sport.baseball', 'rec.sport.hockey', 'sci.crypt', 'sci.electronics', 'sci.med', 'sci.space', 'soc.religion.christian', 'talk.politics.guns', 'talk.politics.mideast', 'talk.politics.misc', 'talk.religion.misc']
11314

Then vectorize the documents into raw term counts for LDA (the commented-out lines show the tf-idf alternative used for NMF)

In [11]:
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer

no_features = 1000

# NMF is able to use tf-idf
#tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=no_features, stop_words='english')
#tfidf = tfidf_vectorizer.fit_transform(documents)
#tfidf_feature_names = tfidf_vectorizer.get_feature_names()

# LDA can only use raw term counts because it is a probabilistic graphical model
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2, max_features=no_features, stop_words='english')
tf = tf_vectorizer.fit_transform(documents)
tf_feature_names = tf_vectorizer.get_feature_names()
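
Note: get_feature_names() works on the older scikit-learn release this notebook was run with; on scikit-learn 1.0+ the equivalent call, and the only one available from 1.2 onward, is get_feature_names_out():

In [ ]:
# Equivalent on newer scikit-learn versions (1.0+)
tf_feature_names = tf_vectorizer.get_feature_names_out()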

Then call the LDA algorithm to fit a topic model, and transform all documents to their topic distributions

In [28]:
from sklearn.decomposition import NMF, LatentDirichletAllocation

no_topics = 20

# Run NMF
#nmf = NMF(n_components=no_topics, random_state=1, alpha=.1, l1_ratio=.5, init='nndsvd').fit(tfidf)

# Run LDA
lda = LatentDirichletAllocation(n_components=no_topics, max_iter=5, learning_method='online', learning_offset=50., random_state=0)
lda_z = lda.fit_transform(tf)
In [4]:
def display_topics(model, feature_names, no_top_words):
    # For each topic, print the top-weighted words: argsort() sorts ascending,
    # so take the last no_top_words indices in reverse order
    for topic_idx, topic in enumerate(model.components_):
        print("Topic %d:" % (topic_idx))
        print(" ".join([feature_names[i]
                        for i in topic.argsort()[:-no_top_words - 1:-1]]))

no_top_words = 10
In [26]:
display_topics(lda, tf_feature_names, no_top_words)
Topic 0:
people gun state control right guns crime states law police
Topic 1:
time question book years did like don space answer just
Topic 2:
mr line rules science stephanopoulos title current define int yes
Topic 3:
key chip keys clipper encryption number des algorithm use bit
Topic 4:
edu com cs vs w7 cx mail uk 17 send
Topic 5:
use does window problem way used point different case value
Topic 6:
windows thanks know help db does dos problem like using
Topic 7:
bike water effect road design media dod paper like turn
Topic 8:
don just like think know people good ve going say
Topic 9:
car new price good power used air sale offer ground
Topic 10:
file available program edu ftp information files use image version
Topic 11:
ax max b8f g9v a86 145 pl 1d9 0t 34u
Topic 12:
government law privacy security legal encryption court fbi technology information
Topic 13:
card bit memory output video color data mode monitor 16
Topic 14:
drive scsi disk mac hard apple drives controller software port
Topic 15:
god jesus people believe christian bible say does life church
Topic 16:
year game team games season play hockey players league player
Topic 17:
10 00 15 25 20 11 12 14 16 13
Topic 18:
armenian israel armenians war people jews turkish israeli said women
Topic 19:
president people new said health year university school day work
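
For a rough quantitative check on the fit, LatentDirichletAllocation also exposes score() (approximate log-likelihood, higher is better) and perplexity() (lower is better). A quick sketch, not part of the original run:

In [ ]:
# Evaluate the fitted LDA model on the same term-count matrix
print(lda.score(tf))       # approximate log-likelihood (higher is better)
print(lda.perplexity(tf))  # perplexity (lower is better)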
In [29]:
#print(lda.components_[0][:5])
print(lda_z.shape)
print(lda_z[0])
print(documents[0])
#print(documents.shape)
(11314, 20)
[0.00172414 0.00172414 0.00172414 0.00172414 0.00172414 0.00172414
 0.00172414 0.46904086 0.14219449 0.00172414 0.00172414 0.00172414
 0.00172414 0.00172414 0.00172414 0.00172414 0.00172414 0.00172414
 0.3594543  0.00172414]
Well i'm not sure about the story nad it did seem biased. What
I disagree with is your statement that the U.S. Media is out to
ruin Israels reputation. That is rediculous. The U.S. media is
the most pro-israeli media in the world. Having lived in Europe
I realize that incidences such as the one described in the
letter have occured. The U.S. media as a whole seem to try to
ignore them. The U.S. is subsidizing Israels existance and the
Europeans are not (at least not to the same degree). So I think
that might be a reason they report more clearly on the
atrocities.
	What is a shame is that in Austria, daily reports of
the inhuman acts commited by Israeli soldiers and the blessing
received from the Government makes some of the Holocaust guilt
go away. After all, look how the Jews are treating other races
when they got power. It is unfortunate.
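
Each row of lda_z is a topic distribution over the 20 topics, so the most probable topic for every document can be read off with an argmax. A minimal sketch (dominant_topic is an illustrative name, not from the original notebook):

In [ ]:
import numpy as np

# Most probable topic index for each document (rows of lda_z sum to 1)
dominant_topic = np.argmax(lda_z, axis=1)
print(dominant_topic[:10])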

In [15]:
# NMF topics (the output below is from a run with the NMF lines above uncommented)
#display_topics(nmf, tfidf_feature_names, no_top_words)
Topic 0:
people time right did good said say make way government
Topic 1:
window problem using server application screen display motif manager running
Topic 2:
god jesus bible christ faith believe christian christians sin church
Topic 3:
game team year games season players play hockey win league
Topic 4:
new 00 sale 10 price offer shipping condition 20 15
Topic 5:
thanks mail advance hi looking info help information address appreciated
Topic 6:
windows file files dos program version ftp ms directory running
Topic 7:
edu soon cs university ftp internet article email pub david
Topic 8:
key chip clipper encryption keys escrow government public algorithm nsa
Topic 9:
drive scsi drives hard disk ide floppy controller cd mac
Topic 10:
just ll thought tell oh little fine work wanted mean
Topic 11:
does know anybody mean work say doesn help exist program
Topic 12:
card video monitor cards drivers bus vga driver color memory
Topic 13:
like sounds looks look bike sound lot things really thing
Topic 14:
don know want let need doesn little sure sorry things
Topic 15:
car cars engine speed good bike driver road insurance fast
Topic 16:
ve got seen heard tried good recently times try couple
Topic 17:
use used using work available want software need image data
Topic 18:
think don lot try makes really pretty wasn bit david
Topic 19:
com list dave internet article sun hp email ibm phone
In [42]:
# Read all .txt files from a folder into a dict keyed by filename
import os
files = {}
filepath = '../A-data/mallet-sample-data'
for filename in os.listdir(filepath):
    print(filename)
    if filename.endswith(".txt"):
        fpath = os.path.join(filepath, filename)
        with open(fpath, "r") as file:
            files[filename] = file.read()
print(len(files))

#for filename, text in files.items():
#    print(filename)
#    print("=" * 80)
#    print(text)
thylacine.txt
elizabeth_needham.txt
gunnhild.txt
uranus.txt
yard.txt
zinta.txt
equipartition_theorem.txt
sunderland_echo.txt
thespis.txt
hawes.txt
hill.txt
shiloh.txt
12
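
These files could be scored against the topic model fitted above by reusing the fitted vectorizer and LDA objects. A minimal sketch, assuming that is the intended next step (new_tf and new_topics are illustrative names):

In [ ]:
# Transform the newly read files with the already-fitted vocabulary, then
# infer their topic distributions with the fitted LDA model
new_docs = list(files.values())
new_tf = tf_vectorizer.transform(new_docs)
new_topics = lda.transform(new_tf)
for name, dist in zip(files, new_topics):
    print(name, dist.argmax())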
In [ ]: