Monthly Archives: September 2015

human longevity – Senescence

Posted: September 25, 2015 at 1:42 am

Welcome to the LongevityMap, a database of human genetic variants associated with longevity. Negative results are also included in the LongevityMap to provide visitors with as much information as possible regarding each gene and variant previously studied in the context of longevity. As such, the LongevityMap serves as a repository of genetic association studies of longevity and reflects our current knowledge of the genetics of human longevity.

Searching the LongevityMap can be done by gene or genetic variant (e.g., refSNP number). You can enter one or more words from the gene's name or use the gene's HGNC symbol. Note that the search is case-insensitive. It is also possible to search for a specific cytogenetic location, but for this you need to tick the box below.

To search for a specific study in the LongevityMap, you may browse or search its literature.

You may download a zipped tab-delimited ASCII dataset with the raw data, derived from the latest stable build of the LongevityMap.
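
For readers who want to work with the export programmatically, here is a minimal Python sketch for loading it. It assumes the download is a ZIP archive containing a single tab-delimited text file with a header row; the file name and the presence of a header are assumptions, so check the actual download for the real layout.

```python
# Minimal sketch: read the LongevityMap export, assuming a ZIP archive that
# holds one tab-delimited ASCII file with a header row. The archive name is
# a placeholder for whatever the site's download link saves.
import csv
import io
import zipfile

with zipfile.ZipFile("longevitymap.zip") as zf:
    data_name = zf.namelist()[0]  # assume a single data file inside
    with zf.open(data_name) as raw:
        reader = csv.DictReader(io.TextIOWrapper(raw, "ascii"), delimiter="\t")
        for row in reader:
            print(row)  # one gene/variant association record per line
```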

If you find an error or wish to propose a study or variant to be included in the database, please contact us. To receive the latest news and announcements, please join the HAGR-news mailing list.

See the original post:
human longevity - Senescence

Posted in Human Longevity | Comments Off on human longevity – Senescence

Animal Longevity and Scale

Posted: at 1:42 am

San José State University, applet-magic.com. Thayer Watkins, Silicon Valley & Tornado Alley, USA. Animal Longevity and Scale

A useful line of analysis is to consider the effect of scale changes for creatures which are similar in shape and differ only in scale. As the scale of an animal increases, the body weight and volume increase with the cube of scale. The volume of blood flow required to feed that bulk also increases with the cube of scale. The cross-sectional area of the arteries and veins required to carry that blood flow increases only with the square of scale. There are other area-volume relationships which impose limitations on creatures. Some of those area-volume constraints, including the one above, are:

Thus, to compensate for body needs which increase with the cube of scale while the areas increase with only the square of scale, the average blood flow velocity must increase linearly with scale. Blood flow velocity is driven by pressure differences. The pressure difference must be great enough to carry the blood flow to the top of the creature and to overcome the resistance in the arteries and veins to the flow. The pressure required to pump blood from the heart to the top of the creature is proportional to scale. The pressure difference required to overcome the resistance to flow through the arteries into the capillaries and back again through the veins is more difficult to characterize in terms of scale. The greater cross-sectional area reduces the resistance, but the greater length increases it. The net result of these two scale influences seems to be that the pressure difference required to drive the blood through the bulk of the creature is inversely proportional to scale. The pressure difference imposed would be the maximum of the two required pressure differences.
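
Stated compactly, with s the linear scale (this restates the paragraph's reasoning in symbols; it is not a formula recovered from the original page):

```latex
Q \propto s^{3}      % required blood flow tracks body volume
A \propto s^{2}      % total vessel cross-sectional area
v = Q / A \propto s  % so mean flow velocity must grow linearly with scale
```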

Shown below are the typical blood pressures for creatures of different scales.

The linear regression of the logarithm of pressure on the logarithm of height yields the following result:

The linear regression of the logarithm of pressure on the logarithm of weight yields:

If blood pressure were proportional to scale then the coefficient for log(Height) would be 1.0 and for log(Weight) would be 0.333, since weight is proportional to the cube of scale. The regression coefficients are not close to the theoretical values, but they are of the proper order of magnitude for accepting blood pressure as being roughly proportional to scale.
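
This kind of fit is easy to reproduce. The sketch below regresses log pressure on log weight with numpy; the data points are invented placeholders (the article's data table did not survive extraction), so take the method from it rather than the numbers.

```python
# Log-log regression sketch: fit log10(pressure) against log10(weight).
# A slope near 0.333 would support pressure scaling with the cube root of
# weight, i.e., linearly with scale. All data below are hypothetical.
import numpy as np

weight_kg = np.array([0.03, 2.0, 70.0, 600.0, 4000.0])       # placeholder animals
pressure_mmHg = np.array([70.0, 90.0, 110.0, 180.0, 220.0])  # placeholder values

slope, intercept = np.polyfit(np.log10(weight_kg), np.log10(pressure_mmHg), 1)
print(f"log10(P) = {slope:.3f} * log10(W) + {intercept:.3f}")
```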

The volume of the heart of a creature is proportional to the cube of scale. The volume of the blood to be moved is also proportional to the cube of scale. From the previous analysis the flow velocity is proportional to scale. Therefore the time required to evacuate the heart's volume is proportional to scale. This means that the heartbeat rate is inversely proportional to scale. The following table gives the heart rates for a number of creatures.

A regression of the logarithm of heart rate on the logarithm of weight yields the following equation:

If heart rate were exactly inversely proportional to scale, the coefficient for log(Weight) would be -0.333. This is because scale is proportional to the cube root of weight. The coefficient of -0.2 indicates that heart rate is given by an equation of the form r = c·W^(-0.2), where W is body weight.

One salient hypothesis is that the animal heart is good for a fixed number of beats. This hypothesis can be tested by comparing the product of average heart rate and longevity for different animals. Because the heart rate is in beats per minute and longevity is in years, the number of heart beats per lifetime is about 526 thousand times the value of the product (60 minutes/hour × 24 hours/day × 365.25 days/year ≈ 525,960 minutes per year). The data for a selection of animals are:

Although the lack of dependence is clear visually, the confirmation in terms of regression analysis is:

The t-ratio for the slope coefficient is an insignificant 0.15, confirming that there is no dependence of lifetime heartbeats on the scale of animal size.

If a heart is good for just a fixed number of beats, say one billion, then heart longevity is this fixed quota of beats divided by the heart rate. From the above equation for heart rate, lifespan (limited by heart function) would be proportional to scale raised to the 0.6 power.
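
In symbols (a restatement of the article's own argument, with N the assumed fixed lifetime quota of beats, r the heart rate, W weight, and s linear scale):

```latex
r \propto W^{-0.2} \propto (s^{3})^{-0.2} = s^{-0.6}  % fitted heart rate in terms of scale
L = N / r \propto s^{0.6}                             % lifespan limited by the beat quota
```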

The data for testing this deduction are:

For the data in the above table, admittedly very rough and sparse, the regression of the logarithm of the lifespan on the logarithm of weight gives

Thus the net effect of scale on animal longevity is positive. Taking into account that weight is proportional to the cube of the linear scale of an animal, the above equation in terms of scale would be of the form L = c·s^0.6.

This says that if an animal is built on a 10 percent larger scale it will have a 6 percent longer lifespan.

Originally posted here:
Animal Longevity and Scale

Posted in Human Longevity | Comments Off on Animal Longevity and Scale

Censorship – Censorship | Laws.com

Posted: at 1:41 am

What is Censorship? Censorship is the act of adjusting, editing, banning, or altering products, expressions, or items considered to be illicit, unlawful, lewd, or objectionable in nature with regard to the setting in which they exist. Although the parameters and protocol surrounding censorship range in nature from broad to particular, the bulk of the classification of materials subject to censorship exists in tandem with applicable legislation based on locale, intent, and the nature of the expression, activity, or item in question.

Legality of Censorship Censorship is enacted under the precepts of Administrative Law. Administrative Law is the legal field associated with events and circumstances in which the Federal Government of the United States engages its citizens. This includes the administration of government programs, the creation of agencies, the establishment of a legal, regulatory federal standard, and any other procedural legislation enacted between the government and its citizens.

Classification of Censorship The legality applied to censorship of acts, expressions, and depictions may vary with the motivation behind the censorship imposed; this means that censorship can take place upon analysis of either the content of the item or expression in question or the intent inherent within it. For example, while certain expressions may be tolerated within certain settings, those same expressions may not be permitted in others:

The Miller v. California case was one in which Marvin Miller, who dealt in the sale of products considered to be sexual in nature, was arraigned with regard to advertisements of his products in a public setting that were presumed to be in violation of the California penal code; although the products that he was selling were not expressly illegal, the setting in which they were advertised was considered to be a violation. Chief Justice Warren Burger held that lewd material did not belong in the public sector.

Privacy is a state in which an individual is free to act according to their own discretion with regard to legal or lawful behavior; however, even within the private sector, adherence to legislation and legality is required with regard to the activity or expression in question.


View original post here:
Censorship - Censorship | Laws.com

Posted in Censorship | Comments Off on Censorship – Censorship | Laws.com

Censorship and Free Speech – jerf.org

Posted: at 1:41 am


In the United States, we have the First Amendment of the Constitution that guarantees us certain things.

Censorship and free speech are often seen as being two sides of the same thing, censorship often defined as "the suppression of free speech". Perhaps there is nothing wrong with this definition, but for my purposes, I find I need better definitions. My definitions have no particular force, of course, but when grappling with problems, one must often clearly define things before one can even begin discussing the problem, let alone solving it. Thus, I will establish my own personal definitions. There is nothing necessarily wrong with the traditional definitions, but it turns out that the analysis I want to do is not possible with a fuzzy conception of what "free speech" is.

It's typically bad essay form to start a section with a dictionary definition, but since I want to contrast my definition with the conventional dictionary definition, it's hard to start with anything else. Free speech is defined by dictionary.com as

Since I don't want to define free speech in terms of censorship, let's remove that and put in its place what people are really afraid of.

Considering both the target of the speech and the publisher of the speech is necessary. Suppose I use an Earthlink-hosted web page to criticize a Sony-released movie. If Earthlink can suppress my speech for any reason they please (on the theory that they own the wires and the site hosting), and have no legal or ethical motivation to not suppress the speech, then in theory, all Sony would have to do is convince Earthlink it is in their best interest to remove my site. The easiest way to do that is simply cut Earthlink a check exceeding the value to Earthlink of continuing to host my page, which is a trivial amount of money to Sony. In the absence of any other considerations, most people would consider this a violation of my right to "free speech", even though there may be nothing actually illegal in this scenario. So if we allow the owner of the means of expression to shut down our speech for any reason they see fit, it's only a short economic step to allow the target of the expression to have undue influence, especially in an age where the gap between one person's resources and one corporation's resources continues to widen.

Hence the legal concept of a common carrier, both obligated to carry speech regardless of content and legally protected from the content of that speech. The "safe harbor" provisions in the DMCA, which further clarified this in the case of online message transmission systems, are actually a good part of the DMCA often overlooked by people who read too much Slashdot and think all of the DMCA is bad. The temptation to hold companies like Earthlink responsible for the content of their customers arises periodically, but it's important to resist this, because there's almost no way to not abuse the corresponding power to edit their customers' content.

I also change "opinion" to "expression", to better fit the context of this definition, and let's call this "the right to free speech":

Though it's not directly related to the definition of free speech, I'd like to add that we expect people to fund their expressions of free speech themselves, with the complementary expectation that nobody is obligated to fund speech they disagree with. For instance, we don't expect people to host comments that are critical of them on their own site.

By far the most important thing that this definition captures that the conventional definitions do not is the symmetry required of true free speech. Free speech is not merely defined in terms of the speakers, but also the listeners.

For structural symmetry with the Free Speech section, let's go ahead and start with the dictionary definition:

The best way to understand my definition of censoring is to consider the stereotypical example of military censorship. During World War II, when Allied soldiers wrote home from the front, all correspondence going home was run through [human] censors to remove any references that might allow someone to place where that soldier was, what that soldier was armed with, etc. The theory was that if that information was removed, it couldn't end up in the hands of the enemy, which could be detrimental to the war effort. The soldier (sender) sent the message home (receiver) via the postal service as a letter (medium). The government censors intercepted that message and modified it before sending it on. If the censor so chose, they could even completely intercept the letter and prevent anything from reaching home.

This leads me naturally to my basic definition of censorship:

There is one last thing that we must take into account, and that is the middleman. Newspapers often receive a press release, but they may process, digest, and editorialize on the basis of that press release, not simply run the press release directly. The Internet is granting astonishing new capabilities to the middlemen, in addition to making the older ways of pre-processing information even easier, and we should not label those all as censorship.

Fortunately, there is a simple criterion we can apply. Do both the sender and the receiver agree to use this information middleman? If so, then no censorship is occurring. This seems intuitive; newspapers aren't really censoring, they're just being newspapers.

You could look at this as not being censorship only as long as the middlemen are being truthful about what sort of information manipulation they are performing. You could equally well say that it is impossible to characterize how a message is being manipulated, because a message is such a complicated thing once you take context into account. Basically, this is a side-issue that won't gain us anything, so we leave it to the sender, receiver, and middleman to defend their best interests. The arrangement takes the agreement of all three to function, and any party can withdraw that agreement at any time, so there is always an out.

For example, many news sites syndicate headlines and allow anybody to display them, including mine. If a news site runs two articles, one for some position and one against, and some syndication user only runs one of the stories, you might claim that distorts the meaning of the original articles taken together. Perhaps this is true, but if the original news site was worried about this occurring, perhaps those stories should not have been syndicated, or perhaps they should have been bound more tightly together, or perhaps this isn't really a distortion. Syndication implies that messages will exist in widely varying contexts.

Like anything else, there is some flex room here. The really important point is to agree that the criterion is basically correct. We can argue about the exact limits later.

So, my final definition:

Going back to the original communication model I outlined earlier, the critical difference between the two definitions becomes clear. Free speech is defined in terms of the endpoints, in terms of the rights of the senders and receivers. Censorship is defined in terms of control over the medium.

The methods of suppressing free speech and the methods of censoring are very different. Suppression of free speech tends to occur through political or legal means. Someone is thrown in jail for criticizing the government, and the police exert their power to remove the controversial content from the Internet. On the receiver's side, consider China, an entire country whose government has decided that there are publicly available sites on the Internet that will simply not be available to anybody in that country, such as the Wall Street Journal. Suppressing free speech does not really require a high level of technology, just a high level of vigilance, which all law enforcement requires anyhow.

Censorship, on the other hand, is taking primarily technological forms. Since messages flow on the Internet at speeds vastly surpassing any human's capabilities to understand or process, technology is being developed that attempts to censor Internet content, with generally atrocious results. (A site called Peacefire, http://www.peacefire.org, has been good at documenting the failures of some of the most popular censorware, as censoring software is known.) Nevertheless, the appeal of such technology to some people is such that in all likelihood, money will continue to be thrown at the problem until some vaguely reasonable method of censorship is found.

The ways of combating suppression of free speech and censorship must also differ. Censorship is primarily technological, and thus technological answers may be found to prevent censorship, though making it politically or legally unacceptable can work. Suppression of free speech, on the other hand, is primarily political and legal, and in order to truly win the battle for free speech, political and legal power will need to be brought to bear.

These definitions are crafted to fit into the modern model of communication I am using, and I have defined them precisely enough that hopefully we can recognize each when we see it, because technology-based censorship can take some truly surprising forms, which we'll see as we go.

See the article here:
Censorship and Free Speech - jerf.org

Posted in Censorship | Comments Off on Censorship and Free Speech – jerf.org

Censorship – RationalWiki

Posted: at 1:41 am

Politically, there exists only what the public knows to exist. ("Politicamente, só existe aquilo que o público sabe que existe.")

Censorship usually refers to the state's engaging in activities designed to suppress certain information or ideas. In the past, this has been done by burning books, jailing dissidents, and swamping people with government propaganda. In modern times, the same techniques can be used, but in places like China it is complemented with a nation-wide Internet firewall and the co-option of journalists.

More generally, the term is also used any time people in positions of power try to prevent facts or ideas embarrassing to them from coming to light. This can be done by editorial boards of periodicals and journals, by restricting what their writers can actually research or write about, or by restricting and censoring what they do write, preventing it from being published. This can be done for many reasons, including fairly legitimate issues of style, or topics that editors just don't think are right for their publication. This type of censorship is not (and probably should not be) illegal; to force a journal or web site to promote ideas the owners and editors find anathema would be a violation of free speech. Actual censorship, however, is usually done much more maliciously, and threats (financial, legal or physical) can be made to prevent something going to publication.

One pernicious consequence of this "right to not publish" is a form of censorship wherein all "major" outlets of information are owned by large corporations, which tend to have certain interests in common, and might, as a group, make it very hard to find information critical of those interests.

Censorship can also come from a government level, and it is this that is usually considered the worst kind of censorship. While individual corporations or private ventures have a right to control the information they host, and their readers are welcome to go elsewhere for their information, governments have a hold over everybody without exception. This leads to a population at large being denied information and, more often than not, forcibly fed incorrect information. It should be noted that, while citizens in most Western countries are safe against government censorship (for the most part, at least), other places have almost completely state-run media where literally no alternative exists for the public to access their information. In recent years, China has been somewhat notorious for censoring large portions of the internet from its citizens.

In modern times, due to ubiquitous channels of mass communication, a kind of censorship can be performed (intentionally or otherwise) by swamping the people with other information to hide some particular point. This form of censorship is associated with the Huxleyan flavour of dystopia (e.g. Brave New World),[1] in which pleasurable, visceral, immediate, concrete stimuli (e.g., supermodels, baby bumps, or Charlie Sheen) crowd out troubling, cerebral, long-range, abstract stimuli (e.g., global warming, nuclear safety, the epidemiological consequences of vaccination refusal).[2]

Counterprotests "shouting down" a group of people are sometimes accused of being censorship, but since they don't usually actually prevent or deny the free expression of what they are protesting, again, this is not really censorship. But the waters can get murky at times!

Also, there is the now almost time-honored way of releasing "bad" political news: do it on Friday evening, after the major news outlets have wrapped up their stories. By Monday, it's not news any more, and often gets much less attention than it might have otherwise. This was brought to light when someone mentioned that 11th September 2001 was a "good day to bury bad news".[3]

The United States has recently seen more use of this insidious form of censorship. In order to "accommodate" demonstrators at high-profile events, they are shepherded into a pre-assigned area rather than being allowed their right of free assembly. These areas are usually placed well out of the media spotlight; for instance, at the 2004 Democratic Party Convention in Boston, the "free speech zone" was some distance away from the building where the convention was held, in a wasteland of construction debris and fences under a roadway that was partially dismantled.

The Bible has at times been noted as containing unsuitable content which would likely result in its censorship in some areas were it not for its religious significance. Prior to the Protestant Reformation, Bible translations into local languages were often censored or prohibited.

It is often claimed by conspiracy theorists or people attacking the Christian religion that a large number of books were rejected or suppressed from the official Bible in order to hide divine revelation or to prevent embarrassment. This is highly misleading. While there are a large number of apocryphal Jewish and Christian religious texts, very few of them were ever widely regarded as authentic. Of the early apocryphal works, only The Shepherd of Hermas, the Epistle of Barnabas, the Apocalypse of Peter, and the Gospel of the Hebrews ever appeared to have much currency outside of small sub-groups of Christians, and even they were widely considered controversial or noted as being "despised" by many early members of the Church. The books which today make up the New Testament are believed to have all originated in the first or second centuries CE, and the contents of those works are considered to be very well preserved, with only a few notable differences (most notably the end of the Gospel of Mark, which may have been written after the rest of the Gospel).

Many of the apocryphal religious writings were censored by the early Church; it is noted that the Apocalypse of Peter was, at one point, forbidden to be read in church, presumably indicating that church authorities did not consider it to be holy scripture.

One notable example of a highly successful piece of apocryphal writing was the Book of Mormon, written by Joseph Smith, founder of the Church of Jesus Christ of Latter-day Saints. It was first published in 1830, a very long time after other biblical apocrypha had been dismissed; it is universally rejected by all other Christian sects. There have been numerous other, less successful attempts at creating new Christian canon.

This varies depending on the country and local views and laws.

Many "rental" and even "on sale" videos are censored. Scenes involving nudity, especially of the male frontal variety, are usually removed. Sometimes one will see both versions on offer, with different ratings on the box. When offered as television broadcasts, similar steps are also taken, with additional editing often employed to make the film fit its time slot. This is sometimes done to lower the level of gore for a film to be broadcast at particular times. For American television in particular, bad words (which are considered worse than all-out gun-toting violence) are also bleeped, cut, or voiced over.

In some parts of continental Europe there is almost no censorship of sexual scenes. In Spain, for example, late-night free-to-air local channels may broadcast uncut hardcore pornography.

In the UK, the BBFC will not censor movies without the permission of the film's producers, but this censorship may be necessary in order to give the movie a specific rating. For example, to preserve its PG rating, Star Wars Episode II is censored to remove a headbutt that would have given the film a 12A rating if it had been left in. Similar guidelines apply for nudity and bad language.

On television, most types of nudity are usually allowed to be shown after the "watershed" of 9pm, except for shots of an erect penis, which are forbidden. Scenes of simulated sexual activity are permitted; real depictions of sex are typically not.

Censorship of books has often included an outright ban on publication. D.H. Lawrence's "Lady Chatterley's Lover" was not legally printed in the UK until 1960, for example. Its publishing was part of possibly the greatest social upheaval of the 20th century; the prosecutor asked if the book was one which "you would wish your wife or servants to read" (it used the word "cunt" - shock, horror!) This sort of censorship persists to the modern day, with the works of authors such as Judy Blume being frequently challenged.

Other censorship can occur for the less blatant but more insidious reason of marketability. The third "Hitchhiker's Guide" book, Life, the Universe and Everything, was censored for the American market. Two occurrences of "Asshole" were changed to "Kneebiter," and "The Most Gratuitous Use of the Word 'Fuck' in a Serious Screenplay" was altered to "The Most Gratuitous Use of the Word 'Belgium' in a Serious Screenplay."

Producers of films also engage in two kinds of self-censorship; both involve paying attention to the "standards" while making the film in order to achieve the desired rating. Sometimes, just one scene or shot is all it takes to change a film's rating. Sometimes, a movie-maker seeks to obtain a lower rating by reducing objectionable material, possibly due to a contractual obligation to keep the film below a certain level, or simply for marketing purposes: G-rated movies have a different target audience, and PG-13 movies have historically been considered to have the largest audience demographic. Filmmakers most especially try to avoid NC-17 ratings or the local equivalent, as many theater chains will refuse to show such movies, greatly reducing their potential profitability.

In a related phenomenon, other times, a film-maker seeks to obtain a higher rating in order to promote the film's "adultness", usually to teenagers who wouldn't be caught dead paying to watch a "family friendly" movie, or simply because the audience will misunderstand what the movie is about if it gets a lower rating. A movie which might otherwise be rated G or PG might have a single instance of cursing inserted into it in order to raise its rating to PG-13, thereby presenting the film as being targeted towards its proper demographic.

Film-makers will sometimes attempt to game the system by including a scene or a line intended to be rejected by the producers or studio, either in order to "negotiate" down to the material that they really want to include while still pretending to be reasonable, or in order to distract the raters from other potentially objectionable material. This material occasionally is not rejected, and thus ends up in the final product, while at other times the rejected material may be used in promotional material before being cut from the final edit of the film. One example is the line "I haven't been fucked like that since grade school", from Fight Club, which was originally presented with "I want to have your abortion" as the line they could back down from, although the original line is included as a deleted scene on the Fight Club DVD. (The latter line, "I want to have your abortion", was actually the original line from the book.[4])

The line between self-censorship and simple editing is not always clear-cut; people may cut out unimportant material simply because they feel it would distract or bother the audience, and thereby better present their true artistic vision or moral of the work, or simply for marketing reasons where their goal is simply to produce something to be consumed.

Lately, in several countries, a new form of censorship has been afoot. Unlike with previous forms, its promoters and practitioners not only pretend to be "committed to free speech," but also to be advocating or carrying out the censorship in the name of promoting or enforcing human rights.

Specifically, they have provided "hate speech" laws and (in some cases) special "human rights" tribunals, which function in the following manner:

This went on with little remark for many years, since the only people being convicted were neo-Nazis who advocated violence against Jews and other non-neo-Nazi groups.

That situation has changed with the designation of two new groups as "protected": Muslims and gays. Unlike race, both homosexuality and adherence to Islam are held by a significant sector of the population to be a "mutable" characteristic: homosexuality is deemed that way by proponents of reparative therapy, while adherence to Islam is indisputably so (arguably, some Muslims will tell you apostasy results in capital punishment, but places with such practices are unlikely to have freedom of speech anyway). This means that, unlike in the cases of racism or anti-Semitism, much of the opposition to Islam and (to a lesser degree) homosexuality is not based in hate. Hence, prosecution of "hate speech" on these grounds is often regarded as ideological censorship.

In the U.K., the acquittal of Nick Griffin on the charge of calling Islam a "wicked vicious faith" spurred the enactment of a new hate speech law, the Racial and Religious Hatred Act 2006, specifically targeting speech deemed offensive on the grounds of one's religion.

In Canada, when the Western Standard magazine published the Jyllands-Posten Muhammad cartoons, a human rights complaint was brought against the magazine's publisher, Ezra Levant. Alan Borovoy, a lawyer who had helped make the human-rights laws under which the complaint was made, stated that the laws had not at all been intended to be used in such a manner.[5] The complainant, Syed Soharwardy, later withdrew it, saying he had gotten a better understanding of freedom of speech and now thought he might be abusing the laws.[6]

When certain advocacy groups are unable to convince the government to censor content that they deem offensive, those groups often establish an "advisory board." These boards then advise like-minded people to avoid certain films, books, TV shows, etc. Sometimes these groups are relatively weak, so they come off as more annoying than ominous. Others make it their mission to influence public policy. Some religious organizations, however, have gone a step further, since most religious leaders have no qualms about bullying their followers into obeying their demands.

In the early 20th century, the Catholic Church established the Legion of Decency to "advise" parishioners on which movies to avoid at the risk of condemning their immortal souls to everlasting hellfire. No, really! Catholics were told that if they watched certain movies, they were committing a mortal sin and that they would go to hell for willfully disobeying the Church. Even future Oscar-winning films weren't spared the wrath of the Legion.[7]

Other such advisory boards include:

Some people who promote censorship aren't closet totalitarians. Sometimes they're just nuts.

See the original post:
Censorship - RationalWiki

Posted in Censorship | Comments Off on Censorship – RationalWiki

Paul Allen: The Singularity Isn’t Near | MIT Technology Review

Posted: September 24, 2015 at 11:47 pm

The Singularity Summit approaches this weekend in New York. But the Microsoft cofounder and a colleague say the singularity itself is a long way off.

Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

While we suppose this kind of singularity might one day occur, we don't think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn't just a progression of steadily increasing capability, but is in fact exponentially accelerating: what Kurzweil calls the Law of Accelerating Returns. He writes that:

So we won't experience 100 years of progress in the 21st century: it will be more like 20,000 years of progress (at today's rate). The returns, such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity [1]
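
As a rough arithmetic check on that claim, suppose (our illustrative assumption, not a figure from the essay) that the rate of progress doubles every decade. Integrating that rate over a century gives

```latex
\int_{0}^{100} 2^{t/10}\,dt = \frac{10}{\ln 2}\left(2^{10} - 1\right) \approx 1.5 \times 10^{4}
```

about 15,000 year-equivalents of progress, the same order of magnitude as the 20,000 years quoted; a slightly shorter doubling time reproduces the exact figure.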

By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can't happen, only to be later proven wrong, often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.

Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these laws will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.

This prior need to understand the basic science of cognition is where the "singularity is near" arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different from the Moore's Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil's timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.

But history tells us that the process of original scientific discovery just doesn't behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don't arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to reevaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don't support the overall Moore's Law-style acceleration needed to get to the singularity on Kurzweil's schedule.

The Complexity Brake

The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end; the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.

So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress. But suppose scientists make some brilliant new advance in brain scanning technology. Singularity proponents often claim that we can achieve computer intelligence just by numerically simulating the brain bottom up from a detailed neural-level picture. For example, Kurzweil predicts the development of nondestructive brain scanners that will allow us to precisely take a snapshot of a person's living brain at the subneuron level. He suggests that these scanners would most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless of whether nanobot-based scanning succeeds (and we aren't even close to knowing if this is possible), Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity: computers could exhibit human-level intelligence simply by loading the state and connectivity of each of a brain's neurons inside a massive digital brain simulator, hooking up inputs and outputs, and pressing start.

However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. Brain duplication strategies like these presuppose that there is no fundamental issue in getting to human cognition other than having sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true theoretically, it has not worked out that way in practice, because it doesn't address everything that is actually needed to build the software. For example, if we wanted to build software to simulate a bird's ability to fly in various conditions, simply having a complete diagram of bird anatomy isn't sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism's behavior. Without this information, it has proven impossible to construct effective computer-based simulation models. Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential. Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is actually getting harder.

The AI Approach

Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM's Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven't been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle: their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can't leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can't deduce that a tightrope walker would have a great sense of balance.

Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson's performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn't happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.

The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can't create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a computer scientist who serves as Vulcans director for knowledge systems.

[1] Kurzweil, "The Law of Accelerating Returns," March 2001.

[2] We are beginning to get within range of the computer power we might need to support this kind of massive brain simulation. Petaflop-class computers (such as IBM's BlueGene/P that was used in the Watson system) are now available commercially. Exaflop-class computers are currently on the drawing boards. These systems could probably deploy the raw computational capability needed to simulate the firing patterns for all of a brain's neurons, though currently it happens many times more slowly than would happen in an actual brain.

UPDATE: Ray Kurzweil responds here.

See the rest here:
Paul Allen: The Singularity Isn't Near | MIT Technology Review

Posted in The Singularity | Comments Off on Paul Allen: The Singularity Isn’t Near | MIT Technology Review

SINGULARITY: a Joshua Gates, Destination Truth …

Posted: at 11:47 pm

Going to miParacon to see Josh and other paranormal personalities? Tweet your experiences and photos to @joshuagatesfans. I will also be monitoring for stuff to share on the fan page once the convention is over!

Josh has been pretty quiet on social media lately. Could it be because we're going to get some news soon? Here is a clue, to your left. I won't say what/where it is or my sources, but I'll just say to "stay tuned" 🙂

The show has been met with much praise, and according to Brad at the production company, viewer numbers have been good. For now, it looks like the only criticisms fans have had for the show are that they miss the ghost hunting/cryptid search elements, and they'd like the crew who follows him and helps make the show to be featured. Fans cannot deny the better quality of filming and the fact that each episode focuses on only one case; because the episodes are less rushed, we get to see more of the destination, and humor is definitely not missing from EXU.

In the meantime, here's some news!

Be sure to follow Josh on Twitter HERE and follow this fan page on Twitter for fan interaction and exclusives HERE. Photo credit to Brandt, whom you can follow on Twitter HERE.

Read more from the original source:
SINGULARITY: a Joshua Gates, Destination Truth ...

Posted in The Singularity | Comments Off on SINGULARITY: a Joshua Gates, Destination Truth …

Top NSA Banner – National Security Agency

Posted: at 9:48 am

NATIONAL SECURITY AGENCY CENTRAL SECURITY SERVICE

FORT GEORGE G. MEADE, MARYLAND 20755-6000

NSA PRESS RELEASE, 5 March 2012. For further information contact: NSA Public and Media Affairs, 301-688-6524

Augusta, Georgia, Mar. 5: The National Security Agency/Central Security Service officially opened the new NSA/CSS Georgia Cryptologic Center at a ribbon-cutting ceremony where officials emphasized how the $286 million complex will provide cryptologic professionals with the latest state-of-the-art tools to conduct signals intelligence operations, train the cryptologic workforce, and enable global communications.

NSA/CSS has had a presence at Ft. Gordon, Georgia, for over 16 years, beginning when only 50 people arrived to establish one of NSA's Regional Security Operations Centers.

As a testament to this rich heritage, GEN Keith B. Alexander, Commander, U.S. Cyber Command and Director, NSA/Chief, CSS, told the guests at the ceremony, which included federal, state, and local officials, that the NSA/CSS workforce nominated Mr. John Whitelaw for the honor of having one of the buildings in the complex dedicated in his name, because they considered him influential to the establishment and success of the mission in Georgia. In 1995 Mr. Whitelaw was named the first Deputy Director of Operations for NSA Georgia and remained in that position until his death in 2004.

"And there have been many successes here at NSA Georgia as evidenced by the fact that this site has won the Travis Trophy six times," said GEN Alexander. The Travis Trophy is an annual award presented to those whose activities have made a significant contribution to NSA/CSS's mission.

"This new facility will allow the National Security Agency to work more effectively and efficiently in protecting our homeland," said Sen. Saxby Chambliss. "It will also attract more jobs to the Augusta area. The opening of this complex means that Georgians will play an even greater role in ensuring the safety and security of our nation."

The new NSA/CSS Georgia Cryptologic Center is another step in the NSA's efforts to further evolve a cryptologic enterprise that is resilient, agile, and effective in responding to the current and future threat environment.

NSA/CSS opened a new facility in Hawaii in January 2012 and is also upgrading the cryptologic centers in Texas and Denver to make the agency's global enterprise even more seamless as it confronts the increasing challenges of the future. More information about the National Security Agency is available online at http://www.nsa.gov.

See the original post here:
Top NSA Banner - National Security Agency

Posted in NSA | Comments Off on Top NSA Banner – National Security Agency

Jitsi (Build 3132)

Posted: at 9:48 am

Unbeknownst to many people, there are a growing number of free stand-alone VoIP clients, some of which aren't half bad. Today I'm going to be doing an in-depth look at one of these free downloadable clients, Jitsi, which is described as an audio/video Internet phone and instant messenger that supports some of the most popular VoIP and instant messaging protocols such as SIP, Jabber, AIM/ICQ, MSN, etc.

The list is extensive, but it had me at SIP and Jabber.

Jitsi, which is written mostly in Java, is a free and open source VoIP and instant messaging application for Windows, Mac, and Linux. It's currently in alpha. Stable releases come out every so often while nightly builds are released several times a day. When appropriate, users are automatically prompted to download and install the latest build (or you can just tell it to do this all without asking).

What separates this application from others like it is the inclusion of enterprise VoIP features such as attended and blind call transfer, call recording, call encryption, conferencing, and video calls.

This version of the application looks and feels great. The main UI is simple and clean, the pop-up call handling screen is easy to use, and the instant messaging feature is handled nicely. Jitsi certainly aims to accomplish a lot. While you can almost expect a few glitches here and there, it is certainly worth trying out.

[ Relevant Sidenote: This review was conducted on a MacBook Pro. ]

As usual, I am going to do a quick walkthrough of how to set up OnSIP with Jitsi. A lot of these steps apply no matter which VoIP provider you're using, so noncustomers will also find this useful. You're going to need your user credentials. They can be found in your OnSIP admin portal under Users. Here is an example of the fields you will need:

Setting Up VoIP Calling

Open up Jitsi and select +Add New Account under File. You should see a screen pop up that looks like this:

Select SIP as your choice from the options provided in the Network dropdown menu, and then hit Advanced in the lower left corner.

You'll be taken to another menu with 3 parts: Account, Connection, and Presence. Account is pretty self-explanatory. Under SIP id, you'll want to input your entire SIP address. Password is your SIP password, and display name can be anything you want.

Next, in Connection, input your Proxy/Domain in the field marked Registrar, and your Auth Username into the field marked Authorization name. You'll want to uncheck Configure proxy automatically if it isn't already, and type sip.onsip.com into the field labeled Proxy if you are an OnSIP customer (Port 5060). Make sure that the preferred transport is UDP and that the Keep alive method is Register.

In Presence, simply check Enable presence (SIMPLE) and leave everything else unchecked.

Hit the Next button. You'll be taken to a summary page where you can go over your settings one last time before you sign in.

Go into the Jitsi preferences. You should see a screen that looks something like the image above, with a list of all your active and inactive accounts. Select Audio and make sure that the codecs (or encodings) enabled are G722, PCMU, PCMA, and telephone-event.
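
To recap, the whole walkthrough boils down to a handful of values. Below is a minimal sketch in Python that gathers them in one place and sanity-checks the SIP id; the function name, field names, and example credentials are hypothetical illustrations, not part of Jitsi or OnSIP.

# Minimal sketch of the SIP account values the walkthrough above configures.
# Field names and example credentials are hypothetical illustrations.

def build_sip_settings(sip_id, password, auth_username, registrar,
                       display_name="", proxy="sip.onsip.com", port=5060):
    """Gather the fields from Jitsi's Account/Connection/Presence tabs."""
    if "@" not in sip_id:
        raise ValueError("SIP id must be a full address, e.g. jondoe@example.onsip.com")
    return {
        # Account tab
        "sip_id": sip_id,                  # your entire SIP address
        "password": password,              # your SIP password
        "display_name": display_name,      # anything you want
        # Connection tab
        "registrar": registrar,            # your Proxy/Domain value
        "authorization_name": auth_username,
        "proxy": f"{proxy}:{port}",        # OnSIP customers: sip.onsip.com, port 5060
        "preferred_transport": "UDP",
        "keep_alive_method": "Register",
        # Presence tab
        "enable_presence_simple": True,
        # Audio preferences
        "enabled_codecs": ["G722", "PCMU", "PCMA", "telephone-event"],
    }

settings = build_sip_settings(
    sip_id="jondoe@example.onsip.com",
    password="not-a-real-password",
    auth_username="jondoe_example",
    registrar="example.onsip.com",
)
print(settings["proxy"])  # sip.onsip.com:5060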

Setting Up XMPP

Setting up IM is even easier. Here I'll show you how to get your my.OnSIP contacts in Jitsi. Once again, select +Add New Account under File. This time, you'll want to select Jabber in the Network dropdown menu, and hit Advanced in the lower left corner. You'll be taken to another menu with 3 parts: Account, Connection, and Advanced. In Account, input your my.OnSIP login credentials. Skip the Connection section, since you don't need to change anything there, and uncheck the three options you see in Advanced (Use ICE, Auto discover STUN/TURN servers, and Use Jitsi's STUN server in case no other servers are available). Click Next at the bottom of the menu, and then Sign In on the summary page that follows.
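
If the Jabber sign-in fails, it is worth ruling out basic connectivity before second-guessing the settings. Here is a minimal, generic sanity check in Python: a plain TCP probe of the standard XMPP client port (5222). The host below is a placeholder, not an actual OnSIP endpoint.

# Generic reachability check against the standard XMPP client port (5222).
# The host below is a placeholder; substitute your own XMPP server.
import socket

def xmpp_port_open(host, port=5222, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(xmpp_port_open("example.com"))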

At Junction Networks, we put each of the phones we use through a multi-step interoperability test in which we apply ~30 test cases. An example of a test case would be the following:

Test phone calls phone B

B picks up

B puts Test phone on hold

B calls phone C

C picks up

B transfers test phone to C

The call must be transferred correctly to C, and B must be released correctly after the transfer. When C picks up, audio must work in both directions between the test phone and C. While the test phone is on hold, there must be no audio between it and phone B.
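
For illustration, that scenario can be written down as a short script of steps and assertions. The sketch below models it with a toy in-memory Phone/Call simulation; the classes and method names are invented for this post and are not Junction Networks' actual test harness.

# Hypothetical sketch of the attended-transfer case described above,
# modeled with a toy in-memory simulation (invented for illustration).

class Call:
    def __init__(self, a, b):
        self.parties = {a, b}
        self.on_hold = False

    def has_audio(self):
        # Audio flows only while the call is active (not on hold).
        return not self.on_hold

class Phone:
    def __init__(self, name):
        self.name = name
        self.calls = []

    def call(self, other):
        c = Call(self, other)
        self.calls.append(c)
        other.calls.append(c)
        return c

    def hold(self, call):
        call.on_hold = True

    def transfer(self, call, to):
        # Hand our leg of the call to the transfer target and drop out.
        # (A real phone would also tear down its consultation call.)
        call.parties.discard(self)
        call.parties.add(to)
        call.on_hold = False
        self.calls.remove(call)
        to.calls.append(call)
        return call

test_phone, b, c = Phone("test"), Phone("B"), Phone("C")

call_ab = test_phone.call(b)             # Test phone calls phone B; B picks up
b.hold(call_ab)                          # B puts test phone on hold
assert not call_ab.has_audio()           # on hold: no audio with B

call_bc = b.call(c)                      # B calls phone C; C picks up
transferred = b.transfer(call_ab, to=c)  # B transfers test phone to C

assert transferred.parties == {test_phone, c}  # call reached C correctly
assert transferred not in b.calls              # B released after transfer
assert transferred.has_audio()                 # audio works between test phone and C
print("attended transfer case passed")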

Build 3132 passed our test cases with no issues.

When I first installed Jitsi a couple of months ago, there was so much static that having an intelligible conversation was impossible. Whatever the issue was, it has since been patched and resolved.

Jitsi supports G.711 as well as the G.722 wideband codec. Narrowband calls sound about as good as a regular landline call.
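
For context, G.711 and G.722 both encode at 64 kbit/s; G.722 sounds better because it captures a wider frequency band, not because it spends more bits. What a call consumes on the wire also includes RTP/UDP/IP packet overhead. A rough back-of-the-envelope sketch in Python, assuming a common 20 ms packetization interval (an assumption on my part, not a measured Jitsi value):

# Rough per-direction bandwidth for a 64 kbit/s codec (G.711 or G.722).
# The 20 ms packetization interval is an assumption, not a Jitsi measurement.

codec_bitrate = 64_000                    # bits per second
packet_interval = 0.020                   # seconds of audio per RTP packet
packets_per_second = 1 / packet_interval  # 50 packets/s
header_bytes = 20 + 8 + 12                # IPv4 + UDP + RTP headers
overhead_bps = header_bytes * 8 * packets_per_second

total_bps = codec_bitrate + overhead_bps
print(f"{total_bps / 1000:.0f} kbit/s per direction")  # ~80 kbit/s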

High definition calls with Jitsi sound absolutely fantastic. You can get HD VoIP calls as long as the person you're on the call with is also using an HD-capable device. I highly recommend using a USB headset when making calls with a soft phone on your computer to get the optimum experience. You can pick up a good headset for less than $30.

For something that costs the end user nothing, Jitsi is a surprisingly good attempt at a unified communications client. I like to think of it as a bare-bones version of Microsoft Lync that doesn't cost me $700+ to set up, and $100 per download.

The main user interface of Jitsi looks a lot like any other IM client, except that you can have a dedicated section for voice contacts in your consolidated buddy list. Clicking on what looks like a small watch face will take you to your call history. You can conveniently redial from this screen. Right next to the watch face button is a search field, which will draw from both your contacts list and your call history. This field will also act as your dialer. Start typing in any number or SIP address, and a small green handset will appear that you can click to initiate the call.

Every contact in your buddy list and call history menus can be dragged and dropped into an ongoing call. What do I mean by that? With Jitsi, every call gets its own pop-up window. It's here that you'll find all of your call handling options: dialpad, create a conference call, hold, mute, record, video, desktop share, transfer, etc. Dragging and dropping people from your buddy list or call history menu into an ongoing call automatically creates a conference call. This seems to work without a hitch, and you're not just limited to a 3-way conference.

The image above shows the pop-up window you see during each call. You can have several calls going at once (simply call another number or SIP address using the dialer field in the main Jitsi UI, and any active calls you have at the time will automatically be put on hold), and each one opens up a new window. I'll very briefly go over some of the functions of interest.

You'll notice that almost everything you can do with Jitsi is laid out in a row at the bottom. At the very left is a button that looks like an old-school rotary dialer. This will append a numpad to the bottom of the window so that you can interact with attendant menus, etc. Next is your conference button. This brings up a window that you can use to invite multiple people to the call at the same time.

The next three buttons are self-explanatory: hold, mute, record (you can designate where your recordings are saved in the Advanced section of the application preferences).

Next is the button to turn on video. Supported video compression formats include H.263 and H.264. I'll admit that I haven't spent too much time testing out video calls on Jitsi, but the few video calls I have done (on Wi-Fi, with just the built-in iSight camera on my MacBook and H.264 selected) were better than I was expecting. No experience-ruining frame rate or picture resolution issues here. I did try doing a video call with a coworker on her CounterPath soft phone, and we weren't able to get it working, despite the fact that both phones were using the same codec. We will do more testing, and I'll update this review with our findings. Also keep in mind that a lot of factors will affect the quality of your video calls, and many of the problems you or I experience may have very little to do with the application. We plan to include video calling cases as part of the JN interoperability test in the near future for applicable user agents.

According to the Jitsi development roadmap, there are tentative plans to implement multi-party video conferencing in Q1 2011.

Finally, Jitsi users can easily conduct blind and attended transfers. If only one call is active, clicking on the transfer button brings up a window where you can quickly input the transfer destination and send the caller on his/her way. If you have multiple calls active, clicking on the transfer button will open up a dropdown menu that includes all your active calls so that you can quickly conduct an attended transfer. Of course, you can also choose to transfer to another number.
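
Under the hood, both styles come down to a SIP REFER request (RFC 3515); an attended transfer additionally embeds a Replaces parameter (RFC 3891) that names the consultation call so the target swaps calls instead of ringing anew. The sketch below builds illustrative Refer-To headers in Python; it is not Jitsi's internal code, and the example values are made up.

# Illustration of how SIP signals the two transfer styles (RFC 3515 / RFC 3891).
# Not Jitsi's internal code; all example values are hypothetical.
from urllib.parse import quote

def refer_to_blind(target_uri):
    """Blind transfer: just point the transferee at the new target."""
    return f"Refer-To: <{target_uri}>"

def refer_to_attended(target_uri, call_id, to_tag, from_tag):
    """Attended transfer: embed a Replaces parameter naming the
    consultation dialog, so the target replaces that call."""
    replaces = quote(f"{call_id};to-tag={to_tag};from-tag={from_tag}", safe="")
    return f"Refer-To: <{target_uri}?Replaces={replaces}>"

print(refer_to_blind("sip:c@example.onsip.com"))
print(refer_to_attended("sip:c@example.onsip.com",
                        call_id="abc123", to_tag="t1", from_tag="f1"))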

Now let's talk about some of the stuff that doesn't work quite as well.

If you're a my.OnSIP user, then you might be used to having the ability to click-to-dial and IM the same contact. You don't really get the same experience with Jitsi. My.OnSIP uses XMPP for IM and OnSIP uses SIP for voice, which means that you'll have to have two separate accounts, and two separate contact lists, for the same group of people. It can get especially confusing if the two types of contacts for one person look exactly the same. Long story short: remember to use your SIP account for calling and your Jabber (XMPP) account for IM.

Adding phone numbers to the voice contacts could be better streamlined. Here is what the add contact form looks like:

You'll notice that you only get to specify the contact name. It actually works fine if you're adding a SIP address. If I type jondoe@example.onsip.com into the contact name field, Jitsi will know to use that as the SIP address, and will even cut off the domain in my contact list so that only jondoe is displayed. Adding actual telephone numbers is a little annoying, since the contact name field is really the what-to-dial field. Sure, you can go back after the contact is added and rename the number to a person's name, but this seems like an unnecessary step.

Since Jitsi is a project that is literally updated several times every day, I don't think a Final Thoughts section is necessarily appropriate. The application has come a long way in a very short time, and there are big plans for the coming year. We expect a lot of updates and fine-tuning.

I would recommend giving this soft phone a download if you do not already have one on your computer, or if you're completely new to VoIP and SIP and just want a way to test out IP calling. It's free, so what have you got to lose?

More here:
Jitsi (Build 3132)

Posted in Jitsi | Comments Off on Jitsi (Build 3132)

Liberty Flames College Football Clubhouse – ESPN

Posted: at 9:46 am

4d

Josh Woodrum threw for 260 yards and two touchdowns, Damian King threw a touchdown pass on a trick play, and Liberty defeated Montana 31-21 on Saturday night.

12d Jake Trotter

West Virginia continues to play good defense, and young receivers Shelton Gibson and Jovon Durante continue to impress in a win against Liberty.

12d

Skyler Howard threw three touchdown passes and Wendell Smallwood scored twice, leading West Virginia to a 41-17 victory over Liberty on Saturday.

15d

No more scheduling FCS opponents. That's the message West Virginia coach Dana Holgorsen is sending to his fellow FBS programs.

18d

Josh Woodrum connected with Darrin Peterson on a pair of touchdowns as Liberty defeated Delaware State 32-13 in a season opener for both teams on Saturday night.

146d

Ron Brown, the former Nebraska assistant who took a job under Bo Pelini at Youngstown State in January, is leaving the Penguins to become associate head coach and receivers coach at Liberty.

291d

Villanova comeback tops Liberty 29-22 in FCS

299d

Liberty outlasts James Madison 26-21

306d

Liberty edges Coastal Carolina 15-14 to win title

313d

Charleston Southern hangs on to tip Liberty, 38-36

320d

Abnar carries Liberty past Monmouth, 34-24

Read this article:
Liberty Flames College Football Clubhouse - ESPN

Posted in Liberty | Comments Off on Liberty Flames College Football Clubhouse – ESPN