💻The Dark Web AI, The end of Photoshop, GPT4 vs PaLM2, and Luma AI!
Catch up with this week's A.I. news and learn about another A.I. tool.
Hey everyone!
Welcome to the new week and back to a bunch of incredible AI updates that have happened in the past few days!
Here’s what we’ll cover:
Scientists train new AI exclusively on the Dark Web 🕶️
The ultimate AI substitute for Photoshop: DragGAN 🖼️
A benchmark that rates the best AI text-generators 💥
Get Drone Shots from your phone using LumaAI! ✈️
Reading time: 5 minutes 10 seconds
Enjoy!
News Reel:
Highlight 1: Scientists train a new AI, DarkBERT, exclusively on the Dark Web!
AI bots are trained on extremely large datasets. But what if that dataset came from the Dark Web? The Dark Web is the part of the internet that isn't indexed by search engines and is home to a lot of criminal activity 🦹🏽. Here's what a few researchers from South Korea found:
The researchers have developed an AI model called DarkBERT to explore and index the dark web, the hidden and anonymous part of the internet associated with illegal activities 🥷🏽.
DarkBERT aims to contribute to the fight against cybercrime and leverage natural language processing techniques 💻.
The model was connected to the Tor browser (a browser, like the Chrome or Safari you may already use, that allows access to the dark web), and it built a database from the collected raw data 🌐.
DarkBERT outperformed other language models, including RoBERTa, in understanding and analyzing dark web content 📈.
Potential applications of DarkBERT include detecting sites selling ransomware or leaking confidential data, as well as monitoring dark web forums for illicit information exchange 💿.
The effectiveness of DarkBERT and the ethical implications of using AI to police the internet remain subjects of discussion and scrutiny, and the work has yet to be peer reviewed 🔬.
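For the curious: DarkBERT follows the same recipe as RoBERTa, a masked language model that learns to fill in blanked-out words. Here's a minimal sketch of querying such a model with Hugging Face's transformers library; the DarkBERT weights themselves aren't publicly downloadable, so we load the public roberta-base checkpoint as a stand-in 🧪:

```python
# Minimal sketch: querying a RoBERTa-style masked language model.
# DarkBERT shares this architecture; roberta-base is only a stand-in here.
from transformers import pipeline

# fill-mask predicts the most likely tokens for the <mask> placeholder
fill = pipeline("fill-mask", model="roberta-base")

for pred in fill("Attackers leaked the stolen <mask> on an underground forum."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```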
Image generated by author
We're lucky the researchers put their knowledge, the Dark Web's data, and AI's capabilities to work for cybersecurity. But what if someone used the same approach to accelerate the criminal activity taking place there? 😥
Here’s the research paper for those interested and the whole article from The_Byte for you to read 🤓.
Highlight 2: DragGAN, the end of Photoshop.
We all know how useful Photoshop is when it comes to image editing…but imagine doing all of that without the need to learn every single command. Well, that is exactly what DragGAN does! 💥 Developed by researchers from the Max Planck Institute for Informatics, MIT, and Google, DragGAN makes editing pictures as easy as drag and drop. As you can see below, the results are incredible! 🖼️
Image by DragGAN’s team
DragGAN consists of two main parts:
Motion supervision helps guide the point you're moving towards the target position 📷.
Point tracking uses special features to keep an eye on the point you're moving 🔍.
This enables DragGAN to make realistic changes, even for tricky tasks like creating hidden parts of images or changing shapes while maintaining a natural look. The tool lets you deform an image with precise control over every pixel's final position. And that's not all: the team behind DragGAN has expressed interest in releasing a model that works for 3D images too! 🎆
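For the technically curious, here's what motion supervision boils down to: a loss on the generator's internal feature maps that pulls the patch around each handle point one small step toward its target. Below is a heavily simplified PyTorch sketch, loosely following the paper (the real method works on StyleGAN2 features and optimizes the latent code, which this toy version only hints at):

```python
import torch
import torch.nn.functional as F

def motion_supervision_loss(feat, handle, target, radius=3):
    """Toy DragGAN-style motion supervision: nudge the feature patch
    around `handle` (x, y) one unit step toward `target` (x, y).
    feat: (1, C, H, W) generator feature map."""
    _, _, H, W = feat.shape
    h = torch.tensor(handle, dtype=torch.float32)
    t = torch.tensor(target, dtype=torch.float32)
    d = (t - h) / (t - h).norm()                        # unit direction to target

    # pixel coordinates of a small square patch centered on the handle
    offs = torch.arange(-radius, radius + 1, dtype=torch.float32)
    dy, dx = torch.meshgrid(offs, offs, indexing="ij")
    px, py = h[0] + dx, h[1] + dy

    def sample(x, y):
        # bilinear-sample the feature map at pixel coords via a normalized grid
        gx, gy = 2 * x / (W - 1) - 1, 2 * y / (H - 1) - 1
        grid = torch.stack([gx, gy], dim=-1).unsqueeze(0)  # (1, P, P, 2)
        return F.grid_sample(feat, grid, align_corners=True)

    # match the (frozen) current patch to the features one step ahead of it
    current = sample(px, py).detach()
    one_step_ahead = sample(px + d[0], py + d[1])
    return (one_step_ahead - current).abs().mean()

# Each optimization step backpropagates this loss into the latent code;
# point tracking then re-locates the handle by nearest-neighbor feature search.
```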
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
paper page: huggingface.co/papers/2305.10…
— AK (@_akhaliq)
5:04 AM • May 19, 2023
Unlike Photoshop, which often requires intricate skill and knowledge, DragGAN simplifies complex edits. It excels at precise image deformation and completes the edited image almost instantly. Although it may not have all the details Photoshop has to offer, it is indeed very close. You can access the research paper here, which has amazing video walkthroughs of the model. Enjoy! 💫
Highlight 3: The definitive benchmark for LLMs!
We've all read it somewhere…but what does LLM really stand for? It stands for “Large Language Model”: a computer program trained to understand and generate human-like text, designed to process and analyze written language much like we do. OpenAI's ChatGPT, Google's Bard, and Anthropic's Claude (mentioned a couple of newsletters ago) are all powered by Large Language Models 🗣️. However, as you may know, each company has been working very hard to release the most powerful one…and we have found a benchmark that gives us a pretty good idea of which ones are best! 🥇
Image by vprelovac
In this benchmark you can find different versions of each LLM, and Google’s PaLM2 is called chat-bison! This is a summary of the results:
Use GPT-4 if you need best quality 😎.
Use claude-instant-v1 for everything else 🤓.
Google PaLM2 is nowhere near OpenAI/Anthropic 😣.
OpenAI models are painfully slow compared to the rest ⌛.
Open-source models are not as reliable 😥.
You may be wondering how quality was measured in this benchmark. Well, every model was challenged with a set of “problems”, and the benchmark outputs a score based on the response each model gives. Here are some examples of these “problems” 🏫:
"Solve the quadratic equation: x^2 - 5x + 6 = 0". Answer: x=2, x=3 🔢
"Convert December 21 1:50pm pacific to Taipei time". Answer: 5:50 am the next day ⌚
A small language riddle: using M instead of P, A instead of E, N instead of A, G instead of C, and O instead of H, how do you spell “peach” under this rule? Answer: mango 🥭
Cool, right? Here you can find the GitHub link for the code, and all the details of this benchmark. Have fun! 🧑🏽💻
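Curious what that scoring might look like in code? Here's a rough, hypothetical sketch of the idea: feed each problem to a model and check its reply against the expected answer. (`ask_model` is a placeholder for whatever LLM API you use, not the benchmark's actual interface.)

```python
# Hypothetical scoring sketch: the real logic lives in the GitHub repo above.
PROBLEMS = [
    ("Solve the quadratic equation: x^2 - 5x + 6 = 0", ("x=2", "x=3")),
    ("Convert December 21 1:50pm pacific to Taipei time", ("5:50",)),
]

def score(ask_model) -> float:
    """Fraction of problems where every expected token appears in the reply."""
    correct = 0
    for prompt, expected in PROBLEMS:
        reply = ask_model(prompt).lower().replace(" ", "")
        correct += all(tok in reply for tok in expected)
    return correct / len(PROBLEMS)

# A trivially "perfect" mock model, just to show the plumbing:
print(score(lambda p: "x=2 and x=3" if "quadratic" in p else "5:50 am"))  # 1.0
```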
A.I. Showcase: NeRFs and Luma AI - Drone shots from your phone!
Luma AI logo
We discovered this week’s tool thanks to fellow AI enthusiast Karen X. Cheng. Her content is fascinating and will definitely teach you a lot! Luma AI allows you to get a drone shot of any scene by just using your phone! Impossible you say? Well, here’s how you do it 🖼️:
Record a long take of raw footage, circling the object several times to capture all the details. Be sure to film from both higher and lower angles, e.g. from head height and below 📷.
Download the Luma AI app (only available on the iOS app store) and upload the footage 📲.
About 10 minutes later, you’ll have a 3D environment with all the details you captured in your video 💫.
You can add keyframes and animate your camera angle by navigating with finger gestures OR use the new AR feature that lets you video record as if you were in that environment 🙅🏽.
Stabilize the footage with one click and you’re all done!👆🏽
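Under the hood, Luma builds a NeRF (Neural Radiance Field) from your footage: a small neural network that learns the color and density of every point in the scene, from which new camera angles are then rendered. The rendering math is surprisingly compact; here's a toy NumPy sketch of volume rendering along one camera ray (illustrative only, not Luma's actual code):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering along one camera ray (toy version).
    sigmas: (N,) densities; colors: (N, 3) RGB; deltas: (N,) sample spacing."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                # opacity per segment
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))  # light surviving so far
    weights = trans * alphas                               # each sample's contribution
    return (weights[:, None] * colors).sum(axis=0)         # final pixel color

# e.g. 64 random samples along one ray -> one rendered RGB pixel
print(render_ray(np.random.rand(64), np.random.rand(64, 3), np.full(64, 0.05)))
```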
Here’s a video and Karen X. Cheng’s post, which provide detailed information and a step-by-step tutorial!
Some tips she provides:
Shoot in landscape instead of portrait.
Use a wide lens, if possible.
Uploading the footage is better than capturing it in-app.
Editing can be done in any video-editing app after exporting.
P.S. Android users still don't have access to the Luma AI app and will have to use the website version, where keyframes must be added manually. 📱
Hope you like it!
AI Art of the Week
We at AI Bits are huge Star Wars fans… and are also animal lovers. So when we came across this Jedi Animal collection from Tretan (Snapchat: tretan_w), we had to share it with you. His work is filled with creativity; we hope you like it as much as we did!
Images generated by tretan_w
You can find them on their Snapchat: tretan_w
See you soon!
-AI Bits