
How-To Tutorials - Technology

7 Articles

What Is Meta's Llama 3.1 405B? How It Works, Use Cases & More

Sami Salkosuo
08 Aug 2024
6 min read
Having 405 billion parameters puts it in contention for a high position on the LMSys Chatbot Arena Leaderboard, a measure of performance scored from blind user votes. In recent months, the top spot has alternated between versions of OpenAI's GPT-4, Anthropic's Claude 3, and Google's Gemini. Currently, GPT-4o holds the crown, but the smaller Claude 3.5 Sonnet takes the second spot, and the impending Claude 3.5 Opus is likely to take the first position if it can be released before OpenAI updates GPT-4o. That means competition at the high end is tough, and it will be interesting to see how Llama 3.1 405B stacks up against these competitors. While we wait for Llama 3.1 405B to appear on the leaderboard, some benchmarks are provided later in the article.

Multilingual capabilities

The main update from Llama 3 to Llama 3.1 is better non-English support. The training data for Llama 3 was 95% English, so it performed poorly in other languages. The 3.1 update adds support for German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Longer context

Llama 3 models had a context window (the amount of text that can be reasoned about at once) of 8k tokens, around 6k words. Llama 3.1 brings this up to a more modern 128k, making it competitive with other state-of-the-art LLMs. This fixes an important weakness for the Llama family. For enterprise use cases like summarizing long documents, generating code that draws on context from a large codebase, or extended support-chatbot conversations, a long context window that can store hundreds of pages of text is essential.

Open model license agreement

The Llama 3.1 models are available under Meta's custom Open Model License Agreement. This permissive license grants researchers, developers, and businesses the freedom to use the model for both research and commercial applications. In a significant update, Meta has also expanded the license to allow developers to use the outputs from Llama models, including the 405B model, to improve other models. In essence, anyone can use the model's capabilities to advance their work, create new applications, and explore the possibilities of AI, as long as they adhere to the terms outlined in the agreement.

How Does Llama 3.1 405B Work?

This section explains the technical details of how Llama 3.1 405B works, including its architecture, training process, data preparation, computational requirements, and optimization techniques.

Transformer architecture with tweaks

Llama 3.1 405B is built on a standard decoder-only Transformer architecture, a design common to many successful large language models. While the core structure remains consistent, Meta has introduced minor adaptations to enhance the model's stability and performance during training. Notably, the Mixture-of-Experts (MoE) architecture is intentionally excluded, prioritizing stability and scalability in the training process.

(Diagram source: Meta AI)

The diagram illustrates how Llama 3.1 405B processes language. It starts with the input text being divided into smaller units called tokens, which are then converted into numerical representations called token embeddings. These embeddings are processed through multiple layers of self-attention, where the model analyzes the relationships between different tokens to understand their significance and context within the input. The information gathered from the self-attention layers is then passed through a feedforward network, which further processes and combines it to derive meaning. This cycle of self-attention and feedforward processing is repeated multiple times to deepen the model's understanding. Finally, the model generates a response token by token, building on previous outputs to create coherent and relevant text. This iterative process, known as autoregressive decoding, enables the model to produce a fluent and contextually appropriate response to the input prompt.

Multi-phase training process

Developing Llama 3.1 405B involved a multi-phase training process. Initially, the model underwent pre-training on a vast and diverse collection of datasets encompassing trillions of tokens. This exposure to massive amounts of text allows the model to learn grammar, facts, and reasoning abilities from the patterns and structures it encounters. Following pre-training, the model undergoes iterative rounds of supervised fine-tuning (SFT) and direct preference optimization (DPO). SFT involves training on specific tasks and datasets with human feedback, guiding the model to produce desired outputs. DPO, on the other hand, refines the model's responses based on preferences gathered from human evaluators. This iterative process progressively improves the model's ability to follow instructions, raises the quality of its responses, and helps ensure safety.

Data quality and quantity

Meta claims to have strongly emphasized both the quality and the quantity of training data. For Llama 3.1 405B, this involved a rigorous data preparation process, including extensive filtering and cleaning to improve the overall quality of the datasets. Interestingly, the 405B model itself is used to generate synthetic data, which is then incorporated into the training process to further refine the model's capabilities.

Scaling up computationally

Training a model as large and complex as Llama 3.1 405B requires a tremendous amount of computing power. To put it in perspective, Meta used over 16,000 of NVIDIA's most powerful GPUs, the H100, to train this model efficiently. Meta also made significant improvements to its entire training infrastructure to ensure it could handle the immense scale of the project, allowing the model to learn and improve effectively.

Quantization for inference

To make Llama 3.1 405B more usable in real-world applications, Meta applied a technique called quantization, which converts the model's weights from 16-bit precision (BF16) to 8-bit precision (FP8). This is like switching from a high-resolution image to a slightly lower resolution: it preserves the essential details while reducing the file size. Quantization simplifies the model's internal calculations, making it run much faster and more efficiently on a single server. This optimization makes it easier and more cost-effective for others to use the model's capabilities.

Llama 3.1 405B Use Cases

Llama 3.1 405B offers various potential applications thanks to its open nature and large capabilities.

Synthetic data generation

The model's ability to generate text that closely resembles human language can be used to create large amounts of synthetic data. This synthetic data is valuable for training other language models, enhancing data augmentation techniques (making existing data more diverse), and developing realistic simulations for various applications.

Model distillation

The knowledge embedded within the 405B model can be transferred to smaller, more efficient models through a process called distillation. Think of model distillation as teaching a student (a smaller AI model) the knowledge of an expert (the larger Llama 3.1 405B model). This allows the smaller model to learn and perform tasks without needing the same level of complexity or computational resources as the larger model, making it possible to run advanced AI capabilities on devices like smartphones or laptops, which have limited power compared to the servers used to train the original model. A recent example of model distillation is OpenAI's GPT-4o mini, a distilled version of GPT-4o.

Research and experimentation

Llama 3.1 405B serves as a valuable research tool, enabling scientists and developers to explore new frontiers in natural language processing and artificial intelligence. Its open nature encourages experimentation and collaboration, accelerating the pace of discovery.

Industry-specific solutions

By adapting the model to data from particular industries, such as healthcare, finance, or education, it is possible to create custom AI solutions that address the unique challenges and requirements of those domains.
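The quantization idea mentioned above can be illustrated with a toy sketch. This is not Meta's actual FP8 scheme (real FP8 quantization keeps a floating-point format and uses careful per-tensor scaling); it is a simple absmax int8 example, with made-up weights, that shows the core trade-off the article describes: storing each weight in 8 bits instead of 16 halves the memory at the cost of a small reconstruction error.

```python
import random
import struct

# Toy absmax quantization: store each weight as a signed 8-bit code plus
# one shared scale factor, instead of a 16-bit float per weight.
random.seed(0)
weights = [random.uniform(-1.0, 1.0) for _ in range(1024)]

scale = max(abs(w) for w in weights) / 127.0
codes = [round(w / scale) for w in weights]      # int8 codes in [-127, 127]
restored = [c * scale for c in codes]            # approximate reconstruction

# Serialized sizes: 'e' = 16-bit float (2 bytes), 'b' = int8 (1 byte).
bytes_16bit = len(struct.pack(f"{len(weights)}e", *weights))
bytes_8bit = len(struct.pack(f"{len(codes)}b", *codes))

# Memory halves, while the worst-case rounding error stays below half
# a quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(bytes_16bit, bytes_8bit, max_err)
```

Running the sketch shows 2048 bytes shrinking to 1024 while every reconstructed weight stays within half a quantization step of the original, which is why quantized inference can preserve quality while fitting on far less hardware.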


Test 11

rohith
03 Jul 2023
1 min read
sfdg his df sdiohfipsdj  sdifhsdjpfhq 9dqi diaohsd8iagb8owd eaw' a8gwbehdisaj'dasid adasdsfcfgsfg lsijf ni0whe9f wh9wef8hwe9 h8w9 he9r0fwe 09hfw9eh0s9hr 8fqh  wad'wq8egsfb8qg 8hed q8g8dwe8r8hewodhasb h8a igdb89wehf8 q9as fhworhf0weo8shfd8oh8fhwe8hr8hwe8 8wuegf8w ef wh8fehw eh fweb fw]s fh9w8eh8qehq8whrohwe rwerwe


4 Ways to Treat a Hallucinating AI with Prompt Engineering

Andrei Gheorghiu
02 Jun 2023
9 min read
Hey there, fellow AI enthusiast! Are you tired of your LLM (Large Language Model) creating random, nonsensical outputs? Fear not, because today I'm opening the box of prompt engineering pills, looking for something to help you reduce those pesky hallucinations.

First, let's break down what we're dealing with. Prompt engineering is the art of crafting input prompts for AI models in a way that guides them towards generating more accurate, relevant, and useful responses. Think of it as gently nudging your AI model in the right direction, so it doesn't end up lost in a sea of information. The word "engineering" was probably not the wisest choice in many people's opinion, but that's already history, as everybody has got used to it. In my opinion, it's more of a mix of logical thinking, creativity, language, and problem-solving skills. It feels a lot like writing code, but using natural language instead of structured syntax and vocabulary. While the user gets the freedom of their own language and depth, with great freedom comes great responsibility: an average prompt will probably produce an average answer. The issue I'm addressing in this article is just one example of the many pitfalls that can be avoided with some basic prompt hygiene when interacting with AI.

Now, onto the bizarre world of hallucinations. In the AI realm, hallucinations refer to instances when an AI model (particularly an LLM) generates output that is unrelated, implausible, or just plain weird. Some of you may have been there already, asking an AI model like GPT-3 to write a paragraph about cats, only to get a response about aliens invading Earth! And while the issue has been greatly mitigated in GPT-4 and similar newer AI models, it's still something to be concerned about, especially if you're looking for precise, fact-based responses. To make matters worse, the hallucinated answer sometimes sounds very convincing and seems plausible in the given context.

For example, when asked the name of the Voodoo Lady in the Monkey Island series of games, ChatGPT provides a series of convincing answers, all of which are wrong. It's a bit of a trick question, as she is simply known as the Voodoo Lady in the original series of games, but you can see how convinced ChatGPT is of the answers that it provides (and continued to provide). If I hadn't already known the answer, I never would have known that ChatGPT was hallucinating.

What Are the Technical Reasons Why AI Models Hallucinate?

Training data: Machine learning models are trained on vast amounts of text data from diverse sources. This data may contain inconsistencies, noise, and biases. As a result, when generating text, the model might output content influenced by these inconsistencies or noise, leading to hallucinations.

Probabilistic nature: Generative models like GPTs are based on probabilistic techniques that predict the next token (e.g., a word or character) in a sequence, given the context. They estimate the likelihood of each token appearing and sample tokens based on these probabilities. If you've ever watched "Family Feud" on TV, you have a pretty good idea of what token prediction means. This sampling process can sometimes result in unpredictable and implausible outputs, as the model might choose less likely tokens, generating hallucinations. To make matters worse, GPTs are usually not built to say "I don't know" when they lack information; instead, they produce the most likely answer.

Lack of ground truth: Unlike supervised learning tasks, where there is a clear ground truth for the model to learn from, generative tasks do not have a single correct output. Most LLMs that we use cannot check the facts in their output against a real-time validated source, as they do not have Internet access. The absence of a ground truth can make it difficult for the model to learn constraints and discern what is plausible or correct, leading to the generation of hallucinated content.

Optimization challenges: During training, the models are optimized using a loss function that measures the discrepancy between the generated output and the expected outcome. In generative tasks, this loss function may not always capture the nuances of human language, making it difficult for the model to learn the correct patterns and avoid hallucinations.

Model complexity: State-of-the-art generative models like GPT-3 have billions of parameters that make them highly expressive and capable of capturing complex patterns in the data. However, this complexity can also result in overfitting and the memorization of irrelevant or spurious patterns, causing hallucinations in generated outputs.

So, clearly, we have a problem to solve. Here are four common mistakes to avoid in order to improve your prompts and get better responses from ChatGPT.

Four Tips for Improving Your Prompts

Not being clear and specific in your prompts

To get the best results, you must clearly understand the problem yourself first. Make sure you know what you want to achieve and keep your prompts focused on that objective. The more explicit your prompt, the better the AI model can understand what you're looking for. So instead of asking, "Tell me about the Internet," try something like, "Explain how the Internet works and its importance in modern society." By doing this, you're giving your AI model a clearer picture of what you want. Sometimes you'll have to work through multiple prompt iterations to get the result you're after, and sometimes the results will steer away from the initial topic. Make sure to stay on track and avoid deviating from the task at hand; bring the conversation back into focus, otherwise the hallucination effect may amplify.

Ignoring the power of an example

Everyone loves examples, they say, even AI models! Providing examples in your prompt helps the model understand the context and generate more accurate responses. For instance: "Write a brief history of Python, similar to how the history of Java is described in this article {example}." This not only gives the AI a clear topic but also a reference point to follow. Providing a well-structured example can also save you a lot of time in explaining the output you're expecting to receive. Without an example, your prompt might be too generic, allowing too much freedom of interpretation. Think about it like a conversation: sometimes, the best way to make yourself understood by the other party is to provide an example. Do you want to make sure there's no misunderstanding from the start? Include an example in your initial prompt.

Not following "divide et impera"

Have you ever tried to build IKEA furniture without instructions? It's a bit like that for AI models dealing with complex prompts: too many nuts and bolts to keep track of, too many variables to consider. Instead of asking the model to "Explain the process of creating a neural network," break it down into smaller, more manageable tasks like, "Step 1: Define the problem. Step 2: Collect and prepare data," and so on. This way, the AI can tackle each step individually and generate more coherent outputs. It's also very useful when you're trying to generate a more verbose and comprehensive response rather than a simple factual answer. You can, of course, combine both approaches: ask the AI to provide the steps first, and then ask for more information on each step.

Relying on the first response you receive

As most LLMs in use today do not provide enough transparency into their reasoning process, working with them sometimes feels like interacting with a magic box. The non-deterministic nature of generative AI can further amplify this problem, so when you need precision, it's best to experiment with various prompt formats and compare the results. Pro tip: some open-source models can already be queried in parallel using this website. Or, when interacting with a single AI model, try multiple approaches for your query, such as rephrasing the prompt, asking a question, or presenting it as a statement. For example, if you're looking for information about cloud computing, you could try:

"What is cloud computing and how does it work?"
"Explain cloud computing and its benefits."
"Cloud computing has transformed the IT industry; discuss its impact and future potential."

Some LLMs, such as Google's Bard, provide multiple responses by default so you can pick the most suitable one. Compare the outputs. Validate any important facts against other, independent sources. Look for implausible or weird responses. Although a hallucination is still possible, using different prompts greatly reduces the probability of generating the same hallucination every time, which makes it easier to detect. Returning to our Voodoo Lady example, by rephrasing the question we can get the right answer from ChatGPT.

And there you have it! By avoiding these common mistakes, you'll be well on your way to minimizing AI hallucinations and getting the output you're looking for. We all know how fast and unpredictable this domain can be, so the best approach is to learn together and share best practices within the community. The best prompt engineering books have not yet been written, and there's a ton to learn about this emergent technology, so let's stay in touch and share our findings. Happy prompting!

About the Author

Andrei Gheorghiu is an experienced trainer with a passion for helping learners achieve their maximum potential. He always strives to bring a high level of expertise and empathy to his teaching. With a background in IT audit, information security, and IT service management, Andrei has delivered training to over 10,000 students across different industries and countries. He is also a Certified Information Systems Security Professional and Certified Information Systems Auditor, with a keen interest in digital domains like security management and artificial intelligence. In his free time, Andrei enjoys trail running, photography, video editing, and exploring the latest developments in technology.

You can connect with Andrei on LinkedIn and Twitter.
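As a closing illustration of the "probabilistic nature" cause discussed earlier, here is a toy next-token sampler. It is not a real LLM: the vocabulary and scores are invented for the example. It only shows the mechanism the article describes, namely that weighted sampling occasionally emits a low-probability token ("aliens") even when one token ("cats") dominates.

```python
import math
import random

def softmax(scores):
    # Subtract the max score for numerical stability before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented candidate tokens and model scores: "cats" is by far the most likely.
vocab = ["cats", "dogs", "aliens", "pizza"]
scores = [4.0, 2.5, 0.5, 0.1]
probs = softmax(scores)

# Sample the "next token" many times, as an LLM would one step at a time.
random.seed(42)
counts = {token: 0 for token in vocab}
for _ in range(10_000):
    token = random.choices(vocab, weights=probs)[0]
    counts[token] += 1

# Greedy decoding would always answer "cats"; sampling still produces
# unlikely tokens like "aliens" a small fraction of the time.
print(counts)
```

That residual chance of an unlikely token is the seed of a hallucination, and it is also why rerunning a prompt several times, as the fourth tip suggests, rarely reproduces the same wrong answer twice.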


Lorem ipsum dolor sit amet honor

Surajit Basak
30 May 2023
3 min read
What is Lorem Ipsum?

Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.

Why do we use it?

It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has a more-or-less normal distribution of letters, as opposed to using 'Content here, content here', making it look like readable English. Many desktop publishing packages and web page editors now use Lorem Ipsum as their default model text, and a search for 'lorem ipsum' will uncover many web sites still in their infancy. Various versions have evolved over the years, sometimes by accident, sometimes on purpose (injected humour and the like).

Where does it come from?

Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots in a piece of classical Latin literature from 45 BC, making it over 2000 years old. Richard McClintock, a Latin professor at Hampden-Sydney College in Virginia, looked up one of the more obscure Latin words, consectetur, from a Lorem Ipsum passage, and going through the cites of the word in classical literature, discovered the undoubtable source. Lorem Ipsum comes from sections 1.10.32 and 1.10.33 of "de Finibus Bonorum et Malorum" (The Extremes of Good and Evil) by Cicero, written in 45 BC. This book is a treatise on the theory of ethics, very popular during the Renaissance. The first line of Lorem Ipsum, "Lorem ipsum dolor sit amet..", comes from a line in section 1.10.32.

The standard chunk of Lorem Ipsum used since the 1500s is reproduced below for those interested. Sections 1.10.32 and 1.10.33 from "de Finibus Bonorum et Malorum" by Cicero are also reproduced in their exact original form, accompanied by English versions from the 1914 translation by H. Rackham.

Where can I get some?

There are many variations of passages of Lorem Ipsum available, but the majority have suffered alteration in some form, by injected humour, or randomised words which don't look even slightly believable. If you are going to use a passage of Lorem Ipsum, you need to be sure there isn't anything embarrassing hidden in the middle of text. All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as necessary, making this the first true generator on the Internet. It uses a dictionary of over 200 Latin words, combined with a handful of model sentence structures, to generate Lorem Ipsum which looks reasonable. The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.


How to add code block support on CK Editor in Vue 3 Project

Surajit Basak
27 May 2023
2 min read
Exercitationem voluptatibus saepe veritatis quibusdam similique. Sed consequatur dolores sunt fuga et. Voluptas rerum reiciendis rerum velit et eos. Ut ut ex sunt consectetur rem cum. Quia ut veritatis minus ad. Aliquid suscipit dicta consequatur est sunt beatae. Quas vel unde dolorem maiores non reiciendis. Tempora laborum et necessitatibus suscipit error repellat. Doloremque quibusdam et nisi excepturi dolorum quia eveniet. Voluptas sed quibusdam numquam non sunt consequatur. Iste et illum provident modi aut qui. Et facere iusto ut earum repudiandae. Modi voluptatibus doloribus eaque iusto quos aspernatur. In officia eum et dolor. Atque minima odit harum omnis quos provident. Sequi deleniti id saepe quam iusto est omnis.

Quote block style:

"An object at rest remains at rest, and an object in motion remains in motion at constant speed and in a straight line unless acted on by an unbalanced force." - By Isaac Newton

"The acceleration of an object depends on the mass of the object and the amount of force applied." - By Isaac Newton

"Whenever one object exerts a force on another object, the second object exerts an equal and opposite force on the first." - By Isaac Newton

Testing for image block using file uploader:
Testing for image that was uploaded using url (centered image):
Slide image:

How can I add code block support to CKEditor 5 in a Vue 3 project?

First, install the CKEditor code block package:

npm i @ckeditor/ckeditor5-code-block

When the install is finished, import the plugin:

import { CodeBlock } from '@ckeditor/ckeditor5-code-block';

Then add the code block plugin to your CKEditor config:

editorConfig: {
  plugins: [CodeBlock],
  toolbar: {
    items: ['codeBlock']
  },
  codeBlock: {
    languages: [
      { language: 'plaintext', label: 'Plain text' }, // The default language.
      { language: 'c', label: 'C' },
      { language: 'cs', label: 'C#' },
      { language: 'cpp', label: 'C++' },
      { language: 'css', label: 'CSS' },
      { language: 'diff', label: 'Diff' },
      { language: 'html', label: 'HTML' },
      { language: 'java', label: 'Java' },
      { language: 'javascript', label: 'JavaScript' },
      { language: 'php', label: 'PHP' },
      { language: 'python', label: 'Python' },
      { language: 'ruby', label: 'Ruby' },
      { language: 'typescript', label: 'TypeScript' },
      { language: 'xml', label: 'XML' }
    ]
  }
}

Here is the demo PHP code (the constructor uses property promotion so that $this->user, $this->service, and $this->product are actually assigned):

<?php

class MySpecialClass
{
    public function __construct(
        private User $user,
        private Service $service,
        private Product $product
    ) {}

    public function authenticate()
    {
        $this->user->login();
    }

    public function process()
    {
        $this->product->generateUUID();

        try {
            return $this->service->validate($this->product)
                ->deliver();
        } catch (MySpecialErrorHandler $e) {
            $e->throwError();
        }
    }
}


Test Article

859
19 May 2023
2 min read
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris eu purus eget ligula ultrices tristique. Aliquam eget ligula eget turpis euismod fringilla vitae et neque. Sed sagittis turpis id facilisis mollis. Nunc id quam et odio consectetur ullamcorper. Donec tincidunt gravida dolor a feugiat. Suspendisse a orci ut ex maximus bibendum. Aliquam feugiat facilisis felis ut tempor.

Phasellus dignissim massa vel dui semper, sed malesuada lorem condimentum. Suspendisse potenti. Mauris ut elit lectus. Integer non tellus rutrum, aliquet leo in, vestibulum magna. Vestibulum facilisis sem vel nunc fringilla condimentum. Sed dictum, neque in consectetur eleifend, nulla orci interdum urna, a lacinia elit mi vel erat. Suspendisse potenti. Nulla sed ultricies mauris. Sed tincidunt bibendum enim id facilisis. Fusce at leo scelerisque, consectetur velit vitae, pretium mi. Sed vitae nisl id lectus vulputate ultrices.

Sed pulvinar lectus in lobortis euismod. Integer at diam dolor. Quisque nec metus cursus, consectetur enim id, scelerisque mauris. Mauris commodo ipsum ut fringilla eleifend. Aliquam vitae vestibulum lorem. In iaculis lobortis mi ut dapibus. Aenean feugiat sagittis ex vitae viverra. Suspendisse congue odio eget arcu vestibulum, a rhoncus lacus suscipit. Quisque auctor odio ac bibendum consequat.

In lobortis elit sed nibh tristique, nec gravida purus accumsan. Donec eleifend libero ac nisi eleifend luctus. Curabitur varius nunc et metus tincidunt, ut hendrerit elit viverra. Sed eleifend, purus sit amet iaculis efficitur, lorem est vulputate ligula, id mattis lacus justo vel sapien. Sed vitae dapibus metus. Aenean viverra eros at ligula mattis dapibus. Quisque eu imperdiet nisl, at fermentum mauris. Vestibulum non urna vitae arcu tincidunt maximus. Donec ullamcorper diam sed odio luctus rutrum. Nullam pharetra mauris sed ligula pellentesque, vitae facilisis ex mattis. Vestibulum vulputate hendrerit sem. Suspendisse et tortor eu est vulputate viverra ac non ligula. Nam efficitur purus arcu, id tincidunt odio placerat vitae.

This is test title packt jskdj ishdi data science for testing java python testcafe open ai for all of you

661
17 May 2023
7 min read
Lorem Ipsum

"Neque porro quisquam est qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit..."

"There is no one who loves pain itself, who seeks after it and wants to have it, simply because it is pain..."

What is Lorem Ipsum?

Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.

Why do we use it?

It is a long established fact that a reader will be distracted by the readable content of a page when looking at its layout. The point of using Lorem Ipsum is that it has a more-or-less normal distribution of letters, as opposed to using 'Content here, content here', making it look like readable English. Many desktop publishing packages and web page editors now use Lorem Ipsum as their default model text, and a search for 'lorem ipsum' will uncover many web sites still in their infancy. Various versions have evolved over the years, sometimes by accident, sometimes on purpose (injected humour and the like).

Where does it come from?

Contrary to popular belief, Lorem Ipsum is not simply random text. It has roots in a piece of classical Latin literature from 45 BC, making it over 2000 years old. Richard McClintock, a Latin professor at Hampden-Sydney College in Virginia, looked up one of the more obscure Latin words, consectetur, from a Lorem Ipsum passage, and going through the cites of the word in classical literature, discovered the undoubtable source. Lorem Ipsum comes from sections 1.10.32 and 1.10.33 of "de Finibus Bonorum et Malorum" (The Extremes of Good and Evil) by Cicero, written in 45 BC. This book is a treatise on the theory of ethics, very popular during the Renaissance. The first line of Lorem Ipsum, "Lorem ipsum dolor sit amet..", comes from a line in section 1.10.32.

The standard chunk of Lorem Ipsum used since the 1500s is reproduced below for those interested. Sections 1.10.32 and 1.10.33 from "de Finibus Bonorum et Malorum" by Cicero are also reproduced in their exact original form, accompanied by English versions from the 1914 translation by H. Rackham.

Where can I get some?

There are many variations of passages of Lorem Ipsum available, but the majority have suffered alteration in some form, by injected humour, or randomised words which don't look even slightly believable. If you are going to use a passage of Lorem Ipsum, you need to be sure there isn't anything embarrassing hidden in the middle of text. All the Lorem Ipsum generators on the Internet tend to repeat predefined chunks as necessary, making this the first true generator on the Internet. It uses a dictionary of over 200 Latin words, combined with a handful of model sentence structures, to generate Lorem Ipsum which looks reasonable. The generated Lorem Ipsum is therefore always free from repetition, injected humour, or non-characteristic words etc.

The standard Lorem Ipsum passage, used since the 1500s

"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."

Section 1.10.32 of "de Finibus Bonorum et Malorum", written by Cicero in 45 BC

"Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil molestiae consequatur, vel illum qui dolorem eum fugiat quo voluptas nulla pariatur?"

1914 translation by H. Rackham

"But I must explain to you how all this mistaken idea of denouncing pleasure and praising pain was born and I will give you a complete account of the system, and expound the actual teachings of the great explorer of the truth, the master-builder of human happiness. No one rejects, dislikes, or avoids pleasure itself, because it is pleasure, but because those who do not know how to pursue pleasure rationally encounter consequences that are extremely painful. Nor again is there anyone who loves or pursues or desires to obtain pain of itself, because it is pain, but because occasionally circumstances occur in which toil and pain can procure him some great pleasure. To take a trivial example, which of us ever undertakes laborious physical exercise, except to obtain some advantage from it? But who has any right to find fault with a man who chooses to enjoy a pleasure that has no annoying consequences, or one who avoids a pain that produces no resultant pleasure?"

Section 1.10.33 of "de Finibus Bonorum et Malorum", written by Cicero in 45 BC

"At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat."

1914 translation by H.
Rackham"On the other hand, we denounce with righteous indignation and dislike men who are so beguiled and demoralized by the charms of pleasure of the moment, so blinded by desire, that they cannot foresee the pain and trouble that are bound to ensue; and equal blame belongs to those who fail in their duty through weakness of will, which is the same as saying through shrinking from toil and pain. These cases are perfectly simple and easy to distinguish. In a free hour, when our power of choice is untrammelled and when nothing prevents our being able to do what we like best, every pleasure is to be welcomed and every pain avoided. But in certain circumstances and owing to the claims of duty or the obligations of business it will frequently occur that pleasures have to be repudiated and annoyances accepted. The wise man therefore always holds in these matters to this principle of selection: he rejects pleasures to secure other greater pleasures, or else he endures pains to avoid worse pains."[email protected] Policyfunction greet(name) {