The Crisis of Nationality and the Nation State in the 21st Century

The principle of the integrity of boundaries and of non-interference in the domestic affairs of states – a number of whose boundaries were redrawn to align with ethnic populations – is considered to have been established by the Treaty of Westphalia. By that time most groups in Europe had ceased to be considered tribal and had begun to be understood as ethnic groups. The takeaway from that transition, and from the results of the Treaty of Westphalia, is that nations and their boundaries in Europe could generally be considered ethnic groupings within unique boundaries.

As a result, the people living within those boundaries came to be considered nationals of that state rather than members of any tribe or ethnic group. This can be considered the formal origin of the term nation state. It implies that a state’s inhabitants share, and are conscious of, a common identity (rather than mere common experience) and normally a common functional language. This implies a single national identity – normally an inheritance from a dominant tribe, group or culture. The term national is important because nationality defines the state’s legal jurisdiction over a person and affords that person the protection of the state. However, it is not the same as ‘citizenship’, which defines the rights and obligations that any particular national may have. Although we know this distinction particularly from Roman times, there are even today United States, British and other nationals who are not citizens of those countries. This is a historical and present precedent for discrimination in individual and general rights, and one that is widely practiced. In brief, it follows that shared identity, or any other chosen characteristic, can be used to accord any physically present person, inhabitant or would-be inhabitant any status under the laws and administration of an individual state, which prevail over any so-called international law or so-called universal values. The laws and administration of an individual state may be based on any principle, value, preference, ideology or even caprice, and as such by definition involve discrimination and inequality.

The transition from singular tribal affiliation to national affiliation allowed for a unity of different ethnic groups, but it necessarily placed some restrictions on the diverging interests of those groups. Nationals of the nation state were thus more than mere residents. As a result, and with some irony, the conceptual framework of the nation state assumes a single national identity, yet it also meant that different ethnic groups within a nation state could be nationals through their self-identification with shared core interests, values, experience, and so on. In this way there could be multi-ethnic nation states, although the viability and sustainability of this structure appears weak.

In settler societies such as the United States, Australia and Canada, the hinterland population can broadly be considered to have been reconciled into a single nationality, although there are some exceptions, and this was achieved over a long time and with great effort. In urban areas the situation is different: in addition to a large population of ‘nationals’, there are three other outcomes for inhabitants: those who choose to embrace the common national identity from within their own ethnic community, those who form a new identity separate from the prevailing national identity, and those who find it possible and desirable to remain separate from the common national identity within their intact ethnic community.

For modern nation states the main challenges to political stability have been legacy boundaries that fail to address the interests of ethnic groups, and the rural-urban divide, in which large populations in urban areas do not share the same national identity as those in the hinterland. Not every country faces a significant problem of legacy ethnic boundaries, but nearly all face the rural-urban divide. This latter challenge has grown as the world has urbanized with larger populations. It should be noted that cities are not themselves nation states, and that even city states are not the same as small nation states. Moreover, within the ‘national’ population that inhabits cities, national identity may be under attack for many reasons, and in the modern world ‘national’ elites may not share a national identity.

The foregoing shows that discrimination by the state is at the heart of governance: in recognizing nationals and citizens; in providing the associated opportunity for habitation, refuge, personal safety (protection), social and economic opportunities, and political and legal rights; and in imposing obligations on the individual. For this the willful social contract of all those physically present or inhabiting is not required. This is primarily because nationality and citizenship do not necessarily derive from location of habitation. The idea that they do so derive has become conventional wisdom in recent decades with the rise of so-called international law and the movement of refugees, but it is not factual. Why not?

‘International law’ is the basis for much political discussion and for the political management of immigration, nationality, citizenship, refugee status, and asylum. The origins of ‘international law’ pertaining to these subjects trace back principally to the time of World War II, the establishment of the United Nations and the de-colonization period, when there were large involuntary and unplanned movements of people from one country to another. In order to address this crisis a system was put into place which ultimately determined that all the world’s people were entitled to a place where they were not in fear of, at least, physical harm, but which has now moved further to practically imply an entitlement to freedom from economic and social insecurity. Of course, this is an extremely open-ended and dangerous concept because it is, in principle, a step toward universal citizenship and toward the abolition of nationality. It means that any individual can make a claim on any jurisdiction for status, benefits and opportunities, and that neither he, his family, nor his place or community of origin bears primary responsibility for him. It also implies a universal freedom of movement that allows access to the individual’s unbounded claims.

Nevertheless, the practical basis of entitlements, rights and obligations is the fundamental reality of physical presence, not claims, although that is only the starting point. The United States has strayed far from its common law origins, so that the basis for common law – tradition, practice and common sense – is now largely forgotten. Domicile is the key condition underlying most civil rights. It differs from residency or habitation in that it requires origin in a place, or the intention to settle there permanently. Origin refers to birth and family, while intention to settle permanently is demonstrated by concrete, meaningful actions. Domicile can be a partial, but not a complete, basis for nationality; nationality does not necessarily guarantee the benefits of citizenship; and citizenship is not a right or a guaranteed package of benefits. Accordingly, ‘international law’ is fundamentally in conflict with accountable local self-government as manifested in the nation state. It is past time to reject ‘international law’, universal citizenship, and global freedom of movement with unlimited claims anywhere as impractical and subversive, and to restore stability to human settlement.


Address of Scott Gibbons to the 8th General Assembly of the East Turkestan Government-in-Exile

Merhaba, Assalaamaleikum. Greetings.

Thanks to the organizing committee for this opportunity to make a few brief remarks on this auspicious occasion.

My name is Scott Gibbons. I am a graduate of Duke University, a National Resource Fellow graduate of the University of Pennsylvania, and the recipient of fellowships from the University of California at Berkeley. I am currently a political economy and geopolitical analyst at the Yonahlossee Institute. I am a longtime member of the Royal Society for Asian Affairs many of whose members traveled to Turkestan during the previous century, and of the Indian Cartographic Society which had its origin in the pioneering exploration and mapping of the trans-Himalayan region.

I wanted to give a few remarks today because of my passion for the region and deep interest in the subjects of national identity and self-determination.

I remember 1986, when the border from Pakistan to East Turkestan opened to non-border nationals and I, along with other intrepid and passionate people, made the crossing to Tashqurghan, Kashghar, Turpan and Urumxi. At that time I recalled the pain and suffering of East Turkestanis in previous years, such as my friend David Osman, who had to flee their homeland. But in 1986 there was so much excitement and enthusiasm everywhere, with the hope that East Turkestan could be opened to the world and would be given the opportunity to engage with the world community.

So many topics could be raised that are well known by most of you here, so I won’t repeat them. They can be explored in great detail by the distinguished personalities present. I will try to provide a perhaps new and emerging context for the status of East Turkestan and, most importantly, for the people native to that region, jurisdiction or state.

We all well know about the distinct cultures and history of the region called East Turkestan, Dogru Turkestan or Azad Sherkiy Turkistan. Yet, as with the case of Tibet, Kurdistan and other lands these have not been sufficient to generate meaningful and effective international support for self-determination and statehood. We can ask why not, and if conditions have changed so that another strategy might be pursued that could be successful.

Nation states are largely modern phenomena, neither natural nor easy to create and maintain. International practice, I will not say international law, is very conservative in its understanding of state boundaries and legitimacy, following the guidance of the Treaty of Westphalia more than 300 years ago. There is no real international law or international consensus on what constitutes states or national boundaries other than what is already in existence. That means that support for new states almost always comes after they become reality, rather than being based on any human rights, historical or functional arguments. Even the popular wisdom, especially here in America, that a “people” should have the right to self-determination is only theoretical. We should recognize that self-determination is achieved only through power and with support from allies.

A nation state is understood to be a bounded area of jurisdiction with a people holding a common identity and shared interests. In the case of East Turkestan, a historical process of such nation building had been in progress for many decades, but it has been slow and interrupted, and it is now seriously undermined by demographic warfare, restriction of freedom, and repressive atrocities. Under such circumstances, a native-people’s East Turkestan nation state cannot emerge to claim its place in the world community.

I asked whether conditions have changed. Yes, they have changed dramatically. The world is waking up, slowly but finally, thanks to Gen. Rob Spalding, Dr. Anders Corr, Joseph Bosco, and others, to the nature and designs of the Chinese Communist Party and the People’s Republic of China as a world power. The most urgent need of the hour is certainly to stop and prevent further atrocities against the native peoples of East Turkestan, because without a living people there can be no basis for claims to a nation state and recognition. Indeed, current policies and atrocities are even making it difficult to call for the basic human rights of the native East Turkestan people. Yet the behavior, policies and history of the People’s Republic of China give little hope of people’s rights or self-determination.

The changed world geo-political conditions force us to consider that the only pragmatic strategy to save East Turkestan and its native people will not be primarily through independent claims of nationhood, but as part of a much broader plan for the dismemberment of China into peaceful nation states that can give each other mutual respect. Only such a strategy, implemented not for humanitarian purposes but out of self-interest on the part of the world’s free nations, can save the East Turkestan people, and indeed the whole world.

Thank you.


The Time Has Come for Radical Structural Reform of the American Higher Education System

Education, and in particular higher education, increasingly faces criticism over whether it equitably delivers to citizens skills that can be financially justified and practically used in the job market and in general life. Fundamentally, the complaints being raised are that cost, the difficulty of gaining admission to a respected institution, and the time and environment needed for study increasingly restrict higher education to smaller numbers of the privileged classes. Beyond this, it is asserted that available courses of study are not adequate or do not attract the appropriate number of students and graduates.

The result is that many in the techno-managerial community insist that more foreign students are required to address national economic needs, while American students feel more and more helpless to finance their education and get jobs that will allow them to have a traditional American standard of living. It is also clear that gaps in educational attainment, financial resources and job/institutional connections are increasingly splitting American society between the haves and the have nots.

Some have suggested that token gestures, such as opening satellite campuses of elite universities in non-elite zip codes or offering exchange programs between elite and non-elite institutions, may be useful in starting to open people’s minds and casting aside long-held shibboleths, but those gestures are far from the revolutionary program that is needed.

Historically, higher as well as lower education had a relatively limited purpose: to educate a very small number of individuals in the higher knowledge of the society. This originally was limited to religion, so the institutions which provided this education were almost exclusively part of the religious establishment. Gradually this expanded to other subjects that were essentially secular religion, such as philosophy, art, history, politics, medicine, etc. As secular religion gained influence, a larger, but still greatly limited number were educated, mainly at private elite institutions affiliated with religious organizations or controlled by benefactors or institutions that themselves established religio-cultural objectives for the institutions.

It was mainly in the 20th Century that scientific, professional and trade subjects became more specialized and expanded, so that academic inputs were needed in addition to or in place of practical skills. As statism and urbanization increased, faux-science subjects such as sociology, political science and psychology emerged. This caused a significant increase in the number of individuals enrolled in higher education and a great increase in state-supported higher education.

The current educational system is broadly divided into a number of sections: pre-school, K-5, junior high, senior high, vocational, junior college, college, post-graduate and professional. Yet the philosophical underpinnings of this structure are strangely absent from discussion, or little known, though rooted in progressivist ideology. That ideology maintained classical education’s objective of catering to the needs of institutional management and the elite classes, with increased emphasis at the college level as universal education was introduced at the primary and secondary levels. Even into the early post-World War II period some classical education remained at the secondary level, but by the late 1960s and onward it had been replaced with contemporary subjects that sought to integrate most classes into modern American consumer and Cold War society.

Classical education was revised into basics-lite, without religion, history and ethics, at the secondary school level, with the result that few students had the cultural grounding needed for classical education at the college level. Coupled with the reduction of college requirements for the study of Western Civilization in favor of pop-culture offerings beginning in the 1960s, the value of a high school education was greatly reduced, so that its contemporary equivalent became the vocational school, junior college or college. As a result there arose a demand for universal higher education, which was itself devalued. The result was a massive expansion of college institutions of all qualities to provide almost universal coverage.

However, even at this diluted quality, differences between weak, average, above-average and elite colleges were increasingly perceived. The apparent benefits of the best colleges began to support significant increases in tuition and establishment staff there, regardless of the subjects studied. Less respected colleges had many of the same operational needs but commanded little market respect, attracting mainly less promising students. Since the financial benefits of attending many colleges and pursuing some courses of study were not commensurate with their costs, in the 1970s and 1980s the Federal Government became involved in financing tuition that, to a large extent, was not justified by market value, and that has resulted in the current high level of graduate indebtedness.

As the Post-World War II population bulge abated, the excess higher education capacity created to provide for the goal of universal college access, particularly for less prestigious colleges, and also for many graduate programs (which were not cost-effective for native-born Americans because of alternative undergraduate-qualified professions), was at risk. Just at that time globalist policies, greatly improved communications and the increased wealth of less-developed countries presented a new education market in the form of foreign students.

Whereas earlier foreign students who attended American universities were rare and usually from well-off families, foreign students from all classes have increased greatly and are treated the same as American students with respect to admission and financial aid. In fact, foreign students are often given preference over American students in order to enhance diversity. Beyond that, most colleges receive a wide range of public subsidies, generally based on the number of students enrolled. Enabling foreign students to be a major part of the education market allows these subsidies to continue even when they should have been reduced as a result of domestic socioeconomic changes.

Among a large or even the overwhelming majority of American families there is a belief that their child should matriculate at the ‘best’ college possible – and that the financial cost will be resolved and justified by the market response after graduation, even without regard to the subject pursued. Of course, not all subjects are equally valued in the market, but most public subsidies are given to nearly all institutions and students regardless of the quality of instruction, course of study, or likelihood of success. Combined with taxpayer subsidies for foreign students this results in a great inequity of public investment. If public subsidies were re-structured to more closely relate to national economic need and the likelihood of successful application of education after graduation rather than just being based on independent individual inclination, a massive savings and benefit to the national economy would result.

For years Federal Government National Resource Fellowships have annually prioritized courses of study for their small program, and this type of prioritization could be expanded to cover all public subsidies to higher education. In contrast to Bernie Sanders’ proposal for unrestricted personal preference-based free tuition, a more justifiable policy for public subsidies based on likelihood of after-graduation success would better guide families and students to assess the realistic costs and benefits of higher education. This new policy would introduce new rules aimed at encouraging cost-effective decisions and might include elements such as:

  • considering financial aid as parental income for tax purposes
  • replacing tenure with performance-based salary and retention
  • termination of the teaching assistant system
  • separation of research and teaching positions
  • removal of all public subsidies for contracting/consulting activities of the institution/employees
  • removing tax-exempt status from public higher-education so that the full costs of services can be levied as business enterprises
  • removing eminent domain and general public-purpose privileges from public higher educational facilities so that they are treated the same as private institutions and as business enterprises
  • introducing some subsidies to national priority courses of study with minimum standards for institutions and student performance
  • restriction on overseas campuses and distance learning from United States institutions
  • restrictions on access to courses of study for foreign students
  • special program for re-orientation of non-competitive institutions, and for institutions that need to relocate and re-structure

The above policy proposals suggest higher costs and a more restricted environment for higher education. That could be true for some, but not all, institutions, courses of study and students. In support of those policies, actions would also be taken to make market-demanded skills more attractive to students, rather than providing course offerings with no practical guidance.

One major initiative that can be taken is to fundamentally reform the control of higher education and professional accreditation by professional associations and regulatory agencies. Broadly speaking, any reasonably qualified student should be able to undertake any course of study that he wishes, but without public subsidies. This would mean no restrictions on the overall numbers of students and institutions for any course of study, except for reasonable and fair pre-qualifications. Complementary licensing and certification requirements imposed by professional associations and regulatory agencies would also have to be thoroughly reformed to remove arbitrary restrictions and unnecessarily high standards that restrict the number of students. In addition to providing greater opportunities to a greatly expanded labor pool, this would also break the wage-setting professional cartels and lower the cost of professional services to the nation.

Post-graduate education serves a higher percentage of foreign students than does college education. The main reason for this is that American students are able to function more effectively, and in a broader range of activities, than are foreign students, so that the income and social benefits of post-graduate education are low or even negative for them. Subsidizing courses of study for American students beyond their out-of-pocket cost will also be needed to counteract the negative long-term trend of increasing foreign student enrollment, especially in graduate and professional schools.

The policies described above cannot be fully formulated and implemented within a short time because higher education has become highly interrelated, in every sector, with the broader society, the professions, government and the ordinary public. However, inequities and dissatisfaction are mounting within American society and have to be addressed at the fundamental level of education. Reform of higher education would be a huge enterprise – not unlike that behind the American success in World War II – and would have to be led by Government with support from the private sector as well as the general public. Since the vast majority of Americans, other than the small group of entrenched elites, stand to gain greatly from the increased freedom and opportunity, this reform program should not belong to any political party. Maybe it is time to make education great again.


Fire at the Margins – the Changing American Ideology

The basis for discourse, and even thinking, in the Western World and the United States changed fundamentally in recent decades. What caused American ideology to change so fundamentally over such a short period of time without a huge debate? To a considerable extent the lack of debate was due to the fundamental dislocation of thought and discourse. And of course there was deep political discord surrounding many issues, such as the Civil Rights Movement and the follow-on Women’s, Sexual, Immigrant, Diversity, Minority and Anti-Religion movements, as well as the less discussed Elite/Technical/Financial empowerment movements. However, the issues raised in these movements focused mainly on the political framework and not on the nature of the society as defined in the myth-based American ideology. Over a short period of time Americans began to think that political changes did not affect non-political society (an extension of the “didn’t matter” argument), and paid limited attention, and offered little resistance, to the aftermath of political changes.


Nevertheless, the changes that came about through the political system automatically seeped into the American ideology, so that it began to be understood that current American society was the very same as that which built the nation and provided its social contract. The reason for the largely un-protested change in the American ideology is that the disruptive political movements mentioned above appeared mainly as a result of temporary and unnatural world economic and political conditions, and cannot be justified or sustained in the United States or other countries as world conditions rebalance. In order to explore this assertion we should consider the nature of the temporary and unnatural world economic and political conditions that engendered those political movements.


We should understand that United States history was largely shaped by the incidental benefits of open space, comparative freedom and continental isolation. We can say that this determined the fundamental nature of America. The first two stages of American history were directly related to these benefits: 1. reaping the benefits of windfall land and abundant resources, and 2. the development and use of new technology and management organization. The third stage is the very recent post-World War II period, with the establishment of the world’s reserve currency, the American Empire and globalization. It could be suggested that we are now entering a fourth stage of population manipulation and control, but that should be explored elsewhere.


America’s good fortune was that it could jump from one historical stage to another to retain, restore and increase its affluence, improve its social equity, and remain at the cutting edge of prosperity and social development. If these stages had been absent, the various social movements such as slavery abolition, women’s rights, the labor movement, and social welfare would not have been possible. The same is true for the sexual freedom, children’s rights, immigrants’ rights, affirmative action and diversity programs that followed. Americans believed that these were political and not economic issues, and they were ultimately reconciled (or pacified) into accepting them, largely because there were apparently no economic costs, their social/cultural costs were blindly ignored, and there were more direct economic costs in fighting these movements (the federal government has largely promoted social activism and reform since the Civil War, and weak opposition could result in jail or employment/business sanctions).


Political Rights in America

In one scene from a movie about John Adams, a leader of the American Revolution, in response to the issue of the “rights of Englishmen,” an official says, “the Crown has ordered and the only course is obedience; you would do well to accept that and act accordingly.” The official meant that it was the Crown that decided what those rights were. Colonial Americans argued that the law was used to crush their rights, but the British Crown did not consider Colonial interests to be rights and so used the law to deny them. Both positions were justified, but there was only one state (the Crown), and its supporters were the controlling power, so they defined the nature of Colonial rights. It was necessary to have a War of Independence in order to break that control over the part of the state that was the American Colonies. Once that was done, the new American Government could define the rights of its citizens according to its own concepts. The lessons to be drawn from this episode are that interests are not rights, and that it is state power that defines any rights.


Much American policy debate in recent years has focused on making group interests into rights and entitlements, but this process of political transition from interests and entitlements to rights has been largely ignored. One of the core issues in this debate was that of the dominant culture (the state) somehow being responsible for the lack of success or disabilities of other groups. Of course this is to some extent true by the above definition. The general nature of society is that the dominant culture enjoys a larger share of its benefits. In the United States for most of its history, the values and interests of the dominant culture were expressed and reinforced by its laws. The unique feature of American society was that subordinate cultures were also granted considerable benefits – but these benefits were special “consideration” rather than rights.


Normally this inequitable situation is accepted, although it is not liked by all. However, in the United States, starting with the Whiskey Rebellion and continuing through the War of Southern Secession, statist government re-emerged from the euphoria of independence and re-established and redefined dominant American values and interests to be generally less onerous and restrictive than in the late Colonial period, but without substantially altering the power of the state to define rights and to enforce dominant values and interests through law and executive orders.


However, there was a remarkable lack of conflict in core interests between the power elite and the middle class, which together formed the dominant culture, due in large part to the excess of resources. Whenever there was a conflict of interest, state power and law were used to decide in favor of the power elite. In the case of cultural values, the state and the law largely accepted the dominant culture as their unwritten guidance. The state and its laws did not always actively promote the dominant culture, but they did not prevent the community itself from enforcing its values in whatever manner was necessary. State power and the law did not define national values, but promoted and ensured the dominant cultural values.


In the larger historical context there were often two political opinions with one having a larger following or greater power support. The opinion of the majority or the powerful always won. However, the United States Government was developed mostly by individuals who had at some time or place been members of some sort of minority and who sought to provide majority rule with some protection for minorities, and the exercise of judgment in implementation and debate of issues. This was never meant to negate the rule of the majority, however that was defined, only that particular minorities might be given some opportunity for public expression and private behavior. This opportunity for expression was never intended for all minorities, all public expression or even all private behavior, only that within the tolerance range of the dominant culture. It certainly never meant equality in government or society for all individuals or groups; only opportunity for an acceptable range of ideas and private behavior.


Over time, and carrying forward the experience of British political economy, state power and the law in the United States assumed primary responsibility for enforcing unwritten values, so that informal social and community action was little needed for this purpose. In short, the community gave up its active role and duty in ensuring social order on the assumption that the state and the law would do so. For generations this combined formal and informal system appeared to work well because a social contract recognized the dominant concepts and values.


Those misfits not sharing the dominant culture’s concepts and values either had to accept a subordinate status or relocate (physically or functionally) to where there was some degree of autonomy for alternate values. This actually took place over time, along with other changes such as urbanization and the technological revolution. As a result, misfits gradually became powerful in fulcrum locations such as cities and in professions such as communications and entertainment that did not require close co-existence with the dominant culture. As American society continued to be further politically dissected, and to be diluted through immigration, a point was reached where the dominant concepts and values began to be challenged.


This presented a huge challenge to the political/legal system, since the underlying concepts and values were not written and codified but were, like the English Common Law, largely assumed on the basis of common experience and use. As a result of challenges to the dominant culture, the law began to assert its own judgment independent of historical precedent. It was in this light that Solzhenitsyn wrote that the US had become a legalistic rather than a moral society. At some point, especially from the 1960s onward, the law separated from the values of the dominant culture and from what it considered right, further promoting divergence from common values. As a result, many issues, even minor ones, developed lives of their own, to be resolved through technical legal machinations, extra-legal administrative procedures or popular sentiment.


Since the law no longer had its base in the historical social contract, it could be changed, or even interpreted, more or less freely; something it had never had preparation or occasion to do. Dislodged from the middle class social contract, the law and other tools could be used only by the power elite and those outside of the social contract, which ultimately meant the state. If the law remained only for the use and benefit of the state and its interests, rather than to enforce the social contract, then the American State became justified in using the law to reduce the influence of the middle class and suppress those who would defy the power elite.


Carrying this dramatic change further into implementation: since the law is now only for the benefit of the state, there is for the first time in American history a dispute not only as to what the law means, but also as to whether it should actually be enforced, particularly in the case of immigration and federal-state powers. Since enforcement of the law has become discretionary, vestiges of the law that do not support state interests can be discarded through court interpretation or even directly ignored. There are too many examples of this current practice to give here. The entitled, rich, famous, influential and victim-status claimants are able to avoid punishment through technicalities and the inaction of the state. At the same time, ordinary people are harassed, often with the knowledge that they are unable to fight for their acknowledged rights. In this environment the state does not serve the society, but seeks to create a society to suit its political economic ambitions.

Although many social, political and economic trends were under way in the United States even from its earliest days, the pace of development and change increased in the 20th Century and especially after World War II. The United States did not suffer nearly to the same extent as Europe and Asia from the destruction of the War, but World War II brought its own disruption to the United States. This took the form of massive dislocation from rural areas and small towns to urban areas, a shift from own-account business and labor to wage employment, and a disruption and consequent mixing of social groups possibly unprecedented in human history. This was accompanied by a parallel and equally unprecedented explosion of new technology and consumer affluence, largely based on the unique position of the United States as the world’s hegemonic industrial and financial power. Together these conditions resulted in a social and cultural vacuum, intoxication and confusion as people sought to assess their identity and the meaning of life. In this vacuum literally everything was thrown open for debate – and, uniquely in history, without any apparent or acknowledged negative economic or social consequences. Almost all changes from tradition were seen simply as increases in the total social welfare.


At the same time a cultural myopia came to be characteristic of American, and to some extent of European, societies in the Post-World War II period. What was this myopia? Americans are often accused of being cocooned in their own experience and not open to other cultures. This is another subject altogether, but I should note that the accusation is itself a certain type of propaganda which distorts actual reality. There is a fundamental difference between this excessively self-confident or self-centered character and the myopia that I am suggesting.


This myopia is the view of the world structure that prevailed after World War II; namely, a world divided broadly into the First, Second and Third Worlds, the Second being the core communist bloc countries. The nature of the Cold War virtually froze these groups in place with limited interaction except for various military operations, espionage, limited immigration to address labor shortages, and technical assistance to the Third World. Of course, each of these elements of limited interaction would create unanticipated conditions of its own, but those conditions came to have broader significance at later times. Until 30-40 years after World War II this “frozen” international structure was combined with extensive economic and technical expansion in the First World, especially the United States. The result was that most of the American day-to-day system and public was almost completely isolated from the rest of the world. It was as if the Second and Third Worlds were on another planet. That meant that the only world that mattered was our own First World. In that framework our actions were the world – our world – and the actions of others either didn’t matter or were the desired responses to our own actions, contained within their own world.


When we read the books, listen to the music and watch the movies of the Post-World War II period, it is clear that from conservative ideologues to utopian visionaries to social reformers and finally to the self-absorbed hippie generation, nearly all Americans were basing their world view and plans on the assumptions that their ideas really mattered, that their ideas were in competition only with those of their domestic opponents, that there was no significant cost to their ideas, and that if they prevailed the results would turn out as they wished. Even the great peaceniks and one-worlders assumed that they were the prime movers and that world peace and brotherhood would come about only as a result of their ideas and efforts. As it turns out, this was not a true understanding of the world situation. It was a special cocoon view of a special period. Once that period ended, all those assumptions would have to be dropped and Americans would have to become just other people living in the world, without any special privilege except for their geographic location and history. Yet even as the world situation has changed, most Americans have not responded in a reasonable way. Even in the midst of this cataclysmic change the new idea of American Exceptionalism has emerged, adding to the cacophony of fiddling while Rome (America) burns.



Diversity Rights and Multiculturalism in America

To a large extent the earlier rights movements initially developed along isolated tracks, through a domino effect. However, once they reached a combined critical mass, they collectively became a single large diversity rights umbrella movement (DRM) that can agglomerate almost any minority or dissident group. This has created a new American community that shares the social contract of opposing traditional values, social structures and heritage, but almost no other core objectives.


In the latter decades of the 20th Century individual groups and government advocated various diversity rights, which ultimately were accepted and accommodated by other groups, institutions and private business. By the second decade of the 21st Century, however, claims for diversity are made almost as regularly by institutions themselves, as they have become comfortable with the new diversity power and management structure and the market benefits it provides.


Diversity rights have come to represent an ideology which holds that there are benefits to having a limitless range of social or feature groups (the best way to describe some of them). This is a major departure from previous American and most human society. Gone today in the United States is the need to have common standards that all should meet. Now having special (or hybrid) characteristics is a competitive advantage. Unifying social and cultural characteristics that created institutions are now seen as a liability. This has introduced a reverse discrimination whereby so many places are set aside for diversity that the majority community (if there is any longer such a thing) must naturally itself become a minority, and a disempowered one. Recent generations embrace the new diversity values they are taught in the absence of any other American social contract. The practical situation has reached the point where extensive diversity goals are now difficult to meet due to the absolute limit of even marginal candidates, necessitating recourse to greater international recruitment that provides global rather than national diversity.


Diversity rights developed into the concept of multiculturalism over a very short span of time. Prior to the 1990s such a concept was largely unknown and irrelevant. People were what they were – and that usually was something. The various “rights” movements described in previous chapters broke down the shared characteristics of the American people and created a buffet environment where one could pick and choose any combination of behaviors and characteristics for one’s life and identity. This was further expanded by intercultural marriages or simply social breeding. So-called multiculturalism was possible because there was no longer an authentic community for most people in America, and all aspects of community could be directly and individually obtained from the consumer market. In this commercial market environment maximum social diversity would be a benefit, allowing the maximum number and type of products for the expression of culture. Since all culture could in theory be broken down into innumerable pieces, no culture would be a misfit, only a different combination of pieces. But culture is not a market good, and not all combinations of pieces make cultures.


American democracy no longer has any significant cultural context – only a shared value of diversity from any historical norm. However, diversity by definition is the lack of common culture and history, so the embrace of diversity has brought with it the loss of culture and history. Diversity as the new social structure combined with the drive for so-called equal rights and equal opportunity means constantly changing standards away from any norm or even convergence. In place of previous respect for other cultures restricted to their natural environments there is now the desire to establish additional alien cultures and behavior as part of and beneficial to American society, even though this has been presenting social problems for decades.


Such a situation has never been experienced before in human history, even at the time of the Tower of Babel. Whereas previous multi-national empires and multi-ethnic states have existed, the lack of a dominant culture and the fabrication of “custom” mixed cultures have not. Therefore we can infer that traditional political economy is not designed to support this structure, and that it can survive only under the special conditions that support the United States today; namely, a non-productive economy sustained by the Dollar as the world reserve currency.


Since there is no common standard to be met for success in the “new” America, various forms of tribalism and nepotism have increasingly gained acceptance. How is this so? Of course there has always been positive discrimination in favor of powerful minorities. What was unique in the American experience was the significant and institutionalized rule of the majority. It was this that was the great achievement of American democracy. Now that formal negative discrimination against some weaker minority groups has largely been eliminated, there is positive discrimination in institutions and business in favor of weak minorities, in addition to the traditional minority power groups within the broader majority. Where once broad discrimination was considered wrong, it is now acceptable and practiced. The group that loses in this transaction is the so-called majority, or more accurately the plurality, since the dissection of society into many minorities means that there is no longer any practical holistic majority – only individuals who cannot be classified as a minority. The natural pathway to success is to emulate the culture of the dominant powers in society. Since there is not really a single dominant power in America, one must choose a single minority, or multiple minorities, to associate with, or, as that is increasingly not possible for some, associate with one of the transcending professional guilds that in effect make up other privileged minorities. Multiculturalism has led to diversity rights, minority preference, internal social decay and the rejection of any unifying features.


Language Rights

Along with a common national border and government, one of the greatest assets of the American people was English as a common language. Even though immigrants came to the United States with different languages, all but a few small groups found it practical to adopt English as their main or only language by the 1960s. The use of one language reduced many costs and restrictions in government and business and contributed to prosperity at home and abroad (from promoting English as the international language). Then sometime after the 1960s the idea of so-called bilingual education began to surface.


It is important to understand how this took place. Throughout American history there had been various organizations that sought to help immigrants learn English, as well as to provide their own special education in their traditional languages and customs, at their own expense, as a supplement to standard English education. Immigrants to America had wanted to learn English and were eager and grateful for the opportunity to learn it. Then suddenly there was an issue of bilingual education and information. In the early 1980s Sen. Hayakawa of California established a movement to make English the official language of the United States (there has never been an official language in the United States), but this movement was not able to achieve its objectives and faded away after the Senator’s death.


The apparent reasons for the push for bilingual education were the increasing intrusion of government into education and all aspects of life, and the increase in immigration to the United States of indigenous and mestizo people from the Americas. Many of these people had a language other than Spanish as their first language. Others were of European stock with various cultures, and of course there was the Cuban hegira, many of whose members expected to return to Cuba. It could be seen, though, that the main target group was the indigenous people, whose only link with any modern culture was through the Spanish Language. Until that time no other group had had any significant problem learning English.


Bilingual services were not raised as a public policy issue, but rather were implemented by government fiat. This was framed initially in terms of public convenience and efficiency, but soon became a rights claim. Although the initial focus was on Spanish, it soon expanded to bilingual services for all other languages as well. Over several decades immigration from Spanish-speaking countries increased to the point where even those of advanced culture, who would normally have acquired English as a first or second language, were encouraged to become a new privileged minority through linguistic and other social services.


Although the core issue had been that of separate supplemental linguistic services in Spanish, as with other rights claims described in earlier chapters, this claim too soon moved into the public domain as a “language” right to be given a place alongside English. Since most immigrants know the value of English in the United States, this was not an action designed to support integration into American society; rather it was a limited benefit to a minority, recreating the facility of its native place, to be paid for by the general American society.


This initial bilingual education policy has expanded with great speed and energy into a drive to establish Spanish as a de facto second national language. It is interesting that the core interest in bilingual education has been only in Spanish, and that Spanish is the only language of sufficient use to be positioned to significantly disrupt the American social contract. Moreover, the benefits to the United States of any cultural link with Hispanic America seem limited in comparison with those of the rest of the world, where populations are eager to use the English Language. The costs of providing bilingual education and support are high, while the value of Hispanic labor input to the American economy is low. Although in comparison to English and other languages Spanish has limited cultural and economic interest for the United States, there has been a continuous effort to promote the study of Spanish in the United States.


In recent years there has been a curious trend among the media elite to pronounce Spanish words in the way a Spanish speaker would, instead of the way the word is pronounced in American English. This is especially curious because the same is not done for words from other languages even closer to and with much more affinity to English, such as French. If someone pronounced Paris as Pahrri, people would think it affected and humorous, yet on National Public Radio many talking heads think nothing of saying Lattinno or Chi-lay. Why would this be so? Why would my local radio station introduce a bilingual classical music program? Is it really to help uneducated Spanish speakers in the US to better appreciate classical music?

As an aside, we also have to ask why, when India or Burma changes the historical names of cities or even of the country, we should adopt these changes. Most countries use names for foreign lands based on their own experience. Looking back, why did we ditch our historical names and terms for China in the 1970s? Psychologically, it is clear that the objective of linguistic rights is to de-link subjects and conversation from a shared American cultural viewpoint and establish them as independent and self-defining.


Manipulation of the Language Culture

As if the bilingual Spanish Language rights movement was not devastating enough for national communications, the entertainment, educational and entitlement service class also began to manipulate English well beyond natural language evolution. Initially there was the politicization, for the CRM, of all words which had any relationship with race. This was expanded to include words referring to all claimants to special rights. Largely this meant restriction on the use of words with historical or judgmental meaning. Later this was expanded to include even simple observational words that could imply preference or judgment. The second stage came with the creation of new words and terms for the objectives of – and supposed disabilities of – claim holders.


This movement continued with the flagrant distortion of the language in the most outrageous way by the use by homosexuals of the term gay as a positive judgment on themselves. Following this came all sorts of spin language for misfits and deviants. Normal people do not routinely need to invent such terms. Now we have reached the stage where only people who use language in a manner contrary to experience and with “political correctness,” that is, in a distorted and misleading way, may participate in discourse outside of the most private life. The beginnings of total thought control through language control can even be seen in the big brother function of the Microsoft spell check and editing functions, which make it difficult to use particular sentence structures and alternative spellings.


I remember that even in the 2nd Grade one of my teachers lectured us kids not to use the word “nice”. Through most of my education teachers criticized us for use of the subjunctive mood, and in recent years the US Government has demanded reports using “action words.” At the same time there has been a relaxation, among supposedly educated persons, of the rules against using a singular verb with a plural noun, as in “there’s” instead of “there are.” There is also the development of speech patterns that differentiate women (and some homosexuals) from men. An example of this is the statement with a question intonation at the end, implying a lack of confidence or judgment, or just a loose “devil may care” pronunciation of any word. This used to be associated with the mindless Valley Girls California female lifestyle, but has become fashionable with many women throughout the country. A good example of this was when one of my so-called colleagues in Afghanistan, with a Master’s Degree from Harvard, constantly talked about the need for “saells”. Most people thought she meant “cells,” but actually she meant “sales”. Still, she could not bring herself to actually pronounce the word as “sayles”. There are many other examples of this juvenile fashion lingo that has entered mainstream use. The deep meaning of this is not simply sound, but attitude to the subject itself.


I will not even try here to explore the attempt, thus far only partially successful, to conduct war on our own culture and language by changing so-called gender and role references in language and behavior. Why should we complicate our lives by dropping the common third person singular pronoun “he” and replacing it with “he or she, him or her”?


Misfit and Technical Elite Empowerment in America

The fundamental argument for the women’s movement and the family revolution was the overriding imperative of independent self-expression. Yet where in history is self-expression found as a basis for society? The power of the self-expression movement came largely from the extraordinary sense of self-importance held by the post-War generation. This broad generational sense of self-importance was a form of national hubris in sharp contrast to the experience of historical societies. After all, most people do not have the capacity to need freedom of self-expression independent of existing social behavior norms. All societies provide freedom for self-expression, which is usually sufficient for most people since they must function within that society. For those few individuals needing or desiring more freedom, the ability to function within the social structure is tenuous, and they are often expelled, exiled or ostracized.


Societies generally provide structures and patterns that enable individuals to function efficiently, so that they do not have to recreate most knowledge, behavior and judgment, although some may see this as restricted freedom. Western societies since the Enlightenment have increasingly emphasized individualism, but for most of the period this has been a limited individualism built on the edifice of established society. In the United States, however, individualism now claims almost all of the social space, creating great inefficiencies and risking social anarchy. This is a more serious threat to the society because even persons with limited capacity for individual expression are encouraged to develop a capacity they may not naturally have; in some cases it is a borrowed individualism. Not only does this create stress and confusion in individuals, but it also undermines the foundations of society by removing the primacy and legitimacy of social guidance and support.


Traditional American society is a framework culture of a complete society that assumes minority groups will fit within the society “and only deviate to a limited extent”. The American “myth” in hubristic contrast assumes that American society can safely allow “full freedom of expression” to alien cultures, interests and investments, and those alien cultures, interests and investments will somehow limit themselves voluntarily to a harmless space within American society where there would be no risk of distorting or destroying common and traditional American institutions and thinking. This fallacy has been promoted by globalist intellectuals and misfits concentrated in cities along with the various rights movements as a type of “unity of disunity”.


Race, sexuality, gender and other rights movements have now become internal aspects of American society based on the assumption of “unity of disunity” and limited deviation. Manipulated in the right way, these movements involve sufficient population numbers to easily split the national society into unmanageable sub-groups. This is made even more inevitable by the broader self-expression movement, which further breaks up each of these increasingly conflicting rights groups. The CRM, sexual, gender and self-expression movements essentially destroyed the structures of moral (having the status of accepted authority), social and cultural leadership. With the loss of this leadership there has also been the loss of the corresponding “followship.” With the loss of leadership and followship, and hence unity, there was only one way that society could be managed – through manipulation and the force of raw power.


This raw power for manipulation is different from the legitimate power that comes from society and culture. It is a statist power with the sole purpose of control and aggrandizement. It is difficult to institute statist power directly because of the resistance of at least some of the people whose freedom, status and assets may be threatened. Using classical warfare strategy the best approach is to avoid direct attack and use subterfuge. In the case of the most modern democratic states this has often been done by increasing distance between issues, deliberation and decisions, otherwise known as reduced transparency. Ironically, this has often been achieved in recent times under the guise of empowerment. This false empowerment essentially diverts the attention of the public to issues and procedures that have no meaningful impact on core decisions. Examples are the establishment of small government units for citizen participation despite the fact that fundamental decisions are only made at higher levels, and where extensive public consultation procedures are implemented only for the purpose of reporting and process rather than as actual decision making inputs.


As long as basic government and management issues were practical and understandable to the common man, it was possible to organize political issues so that debate and battle could be focused and accessible. There were actually few perennial political issues other than taxes and economic rights (from time to time new major issues such as slavery, suffrage and war came up, but these too were mostly, on their face, simple and straightforward). The limitation on political issues was due to the limited size of, and expectations from, government. There were many other issues, but those by and large did not enter the political sphere, remaining within the family, community and larger cultural group. Starting with the FDR New Deal programs, government began to take responsibility for most aspects of individual lives. This was due to the presumption by the ruling elites (or wannabe ruling elites) that individuals could not manage their own affairs because of broader forces beyond their control. Of course there is some truth to this in the face of the long term growth of cultural mixing, population sizes and urban concentration. As a result, the challenges resulting from the Great Depression (a simple economic crisis) were used as an opportunity to expand government into aspects of life which had otherwise been independent of the American (or any other) political economy.


Over several decades the “emergency” activities of the New Deal created bureaucracies and management structures which operated at higher strategic levels beyond the local community and required specialized experience and training. This brought new demands on colleges and universities to specialize in management and management-related areas rather than traditional and classical liberal arts. All subjects eventually became analytical, technical and managerial rather than judgmental. This is well explained by Immanuel Wallerstein in his writings on world-systems analysis. The result was broad and extensive. The influence of traditional values, history and culture was excised in favor of a pseudo-scientific technical structure for most subjects. Not only did this remove the possibility for the average man to understand even common subjects, but it also created a community- and culture-free class of educated “technocrats” who could migrate like carpetbaggers to almost any place and work without any need to have any relationship with, or values in common with, the community. The range of work requiring such “technocrats” has expanded massively, to the point where almost all “professional” jobs are filled by such persons. As a result of affirmative action and anti-discrimination rules, “professionalized” jobs now require national profile representation, which often actually requires the jobs to be filled by individuals alien to the community who were themselves already similarly selected through the educational process.


As a result of this professionalization of work, city planners (of whom I may be one), lawyers, educators, social workers, medical practitioners, even engineers (trained in sub-disciplines such as so-called value engineering) could be from anywhere, even outside the United States, as long as they had the sanction of their guild or met diversity goals. It became almost impossible for the common man to question or oppose them, since they were comfortably paid and sanctioned to advance their professional work while the common man taxpayer was busy at work and home. Since the common man generally could not have access to the privileged and often arcane professional tools and assumptions, he could question decisions only on the basis of his non-scientific subjective preferences. As a result, technocrats have become culturally and operationally separated from the regulated public. This chasm became even more pronounced when government and private management also integrated affirmative action and other unrelated agendas into their management decisions. Beyond this, decisions such as long term investment in energy and future planning have moved away from a focus on current residents (constituents) to unknown and hypothetical future beneficiaries (clients) who must be represented against the current residents and whose interests are defended by the technocrats.


The result was not only an intrusion of government into the provision of services (hardly present for most of the country’s history), but ultimately also into the regulation of all aspects of life. This was not just cultural regulation, such as requiring a certain number of years of schooling and the teaching of certain subjects (with government-chosen textbooks), but also practical regulation of implementation procedures and techniques. The best example of this practical regulation is zoning. One of the basic freedoms and rights of Americans is property ownership, and ownership means freedom to use property as one wishes and for one’s personal gain. However, government in most places has long since usurped this power through zoning and management by, most often, non-local technocrats. Before one starts to imagine that property rights can only be infringed by a socialist conspiracy, it should be noted that the actual result of technocratic capitalism is the concentration of property rights in the hands of a limited number of developers/property managers who can work the regulatory system and meet government and professional standards. Hiring the architect, zoning lawyer, planner, engineer, etc. required for most successful zoning applications is too expensive, complicated or risky for the common man. As a result, a higher scale of business organization and operation is needed to collaborate with the government technocrats, who have in essence become development partners.


This process can be observed in all areas of the economy and aspects of society. Government, insurance, banking and professional regulation of medical charges, licenses and business operations may at first sound reasonable and beneficial. However, the real purpose and result of this regulation is control – not only by the operators of businesses, but by unaffected salaried (and sometimes commissioned) technocrats. This overregulation has come to mean that an individual must use “guild” judgment and not his “personal” judgment. As a result, a “personal” word or “subjective” assurance cannot be honored within the technocratic system. The direct and visible individual is no longer the holder of power; rather, it is the invisible guild. As a result, the trust, authority and legitimacy basis for transactions and methods – the essence of the Common Law – no longer exists and is replaced by simple, hidden, supposedly objective, but often disguised power.

In the same way that Wallerstein observed the false technocratization of the social sciences, job standards have also been increasingly guildized to create a false standard of professional and performance quality. This has ultimately led to a warped reliance on rules and law instead of practical objectives and logic. Moreover, valid discussion and conflict have been suppressed and replaced by the impersonal assignment of gains and losses – the so-called cost-benefit and alternatives analysis in which the advocated position almost always wins because the rules and analysis are controlled by the advocates.


It is not liberal or conservative, government or big business, or ownership, but technical, regulatory and management control that is the prime feature of this technocratic guild environment. Daniel Bell observed in The Coming of Post-Industrial Society that the Soviet Union and the United States shared more and more features of regulation and control, which brought into question the practical meaning of their professed ideological differences. In the Soviet Union industry was government managed and regulated; in the United States it is mostly privately managed but government regulated (and in India large industry was government planned and regulated, but implemented by private oligopoly business).


The key point here is that technocrats (read: outsiders and misfits) found skills and an environment to empower themselves over traditional social and political structures. This has happened from the smallest town to the largest agencies of the federal government. Coming Apart and Bobos in Paradise give some examples of the types of jobs and individuals who have claimed these opportunities, though both books focused more on the top elite than on the broader job structure. If we look more carefully at that broader structure, we find that the staffing of county or town planner positions follows, at a lower level, the same pattern as that of Goldman Sachs alumni who float between the Department of the Treasury, the boards of major universities and corporate management. That pattern is the aggregation and transfer of power and rights away from the experience of the core society and the common man to a super-experiential control of ideas and the decision-making process by technocrats. As this system developed, the energy needed to create and empower the technocrats was expended, while the energy needed to remain a technocrat was reduced.


As touched on in Coming Apart, the higher-level technocrats have developed their own culture and survival strategy, but largely do not promote it as a valid example for non-technocrats. In Coming Apart, Charles Murray chooses to limit his assessment of this behavior. It is not just their personal and family behavior that technocrats generally do not promote and evangelize (in contrast to their political ideas, such as those discussed in the previous sections), but also their ethnic, regional and traditional cultural identities. The much deeper significance of this is that the technocrats at all levels have pulled away from their origins and communities, so that they now have a professional or class identity rather than a local one. This can be seen in their marriages, religion (or lack of it), adopted children, residential segregation, and general distance from the communities from which they sprang.


The consequences of this structural change in social terms are huge, but so too are the economic and political impacts. The change greatly reduces local community linkages for upward mobility, aspiration, morally uplifting guidance, and the flexibility that creates opportunities for individual genius and initiative. Thus, the conditions that made America’s great development possible before the 1970s are no longer present. Gradually increasing social benefits gave away the American labor cost advantage. Then international education and aid gave away American technical advantages. The technocratization of management removed the power of the common man from design. Finally only the power of the US Dollar as the international reserve currency was left, but as the world changed and became much more competitive, that too is declining. Other than the consumption and social welfare benefits derived from the US Dollar as the international reserve currency, technocratization may be the only structural, cultural and historical product of Post-War America.


In his classic book, The Theory of the Leisure Class, Thorstein Veblen spoke about the masses imitating the elite, and is remembered for his observation that shielding one’s skin from the sun was fashionable – since it indicated freedom from manual labor – until the elite discovered tennis and stylish suntans. It is too bad that Veblen is not here today to update his ideas on status behavior. Certainly things have changed drastically since the 1960s. Prior to the CRM, American society enforced some degree of behavioral and value conformity and aspiration, with deference to the power of the ruling elite. That deference was essentially a type of civility.


As a result of the statist technocratic structure that took over in the latter part of the 20th Century, with its socially and culturally alien and sometimes adversarial technocracy, social misfits and radicals not bound to communities and tradition have become better qualified as technocrats, wise men and respected elders. Consequently there has been a dramatic increase in the number of homosexuals, women, recent immigrants and others in the technocratic/elite structure. The odds of this revolution occurring naturally are very low. The odd lyrics and slogans of the radicals and outsiders are now taken as reality and scripture, while the Bible is distorted and defamed. The aggressive, disruptive and disrespectful have become the leaders and role models. Ultimately it is only those technocrats who have no traditional or community cultural values that are taken into the power structure – but what then is the basis for their judgment and truth? The ultimate answer is their guild knowledge.


Conservatives often blame the reduction of personal liberty on “liberals,” but it is more accurate to point the finger at the socio-cultural “misfits” who have become technocrats. This class often dislikes or hates traditional and mainstream society, and wants to destroy it or at least to render it powerless to impose its values. These technocrats can be either Democrats or Republicans, and may be conservative or liberal on different issues. Illegal, fraudulent or ill-advised immigration has further diluted the national identity, consensus and social contract, making it easier for clever misfits to succeed despite their conflict with local communities by further ensuring that there are no common standards in behavior, education, history, language and family values. This has created an environment where technocrats in all sectors can “break free” from the masses. The last bastion of freedom from technocratic rule is states’ rights, which support local self-rule, but supporters of both the Federal Government and the United Nations are continuously working to destroy this structure and freedom.


Most technocrats and technocratic organizations no longer truly “fit in” with any traditional group. Instead they use and manipulate the various groups for their own interests. An interesting example of this at the political level is the election of Barack Obama as President of the United States. Obama had no local base and used various unrelated “vote banks,” such as the heretofore unknown “swing” vote of the racially/culturally non-identified, to win the Presidency. Indian politics has long relied on piecing together numerous unrelated “vote banks” for its elections, and America has now adopted the same approach, though America’s vote banks are entitlement lobby groups rather than ethnic and cultural groups as in India.


The election of Barack Obama marked a high point at which American society removed itself from de Tocqueville’s local-government reality to live in a fantasy world of systems and procedures. The US Dollar as the international reserve currency, technology, and technocracy with a highly developed system of rules and decision-making techniques have alienated decisions and consequences from the mass public. The absence of any means for the common man to enter public policy discourse, or to influence social and community standards, due to the structure of technocratic rule is itself a type of violence for social control. It is not surprising that the final tool left to the common man for redress – physical violence (from corporal punishment of children to ownership of firearms to physical intimidation) – is greatly opposed by the technocrats and elites, because they seek to control without effort or opposition. The statist system, with its regulation and obfuscating standards, claims to reduce social risks, but does so by reducing opportunity and freedom.


Sex, Drugs and Rock and Roll – the Luckiest Generation

The Generation Gap

Closely related to the establishment of the statist technocratic system and empowerment of a social misfit and radical class of technocrats is the emergence and power of the Baby Boom Generation in the United States. The Baby Boom Generation was closely identified with the Vietnam War, but it was the generation’s demand for freedom and leisure to enjoy sex, drugs and rock and roll that defined it.

Probably there has always been an eagerness among youth to assume the roles of adults, but historically it has been necessary first to gain the core cultural knowledge which elders hold. This necessarily put some obstacles and delays in the way of previous younger generations. However, with the rapid changes in society and technology, and with increased leisure and affluence, youth in the Post-War period began to imagine that they had an independent claim to an early transition to social power. It is ironic that this claim was made against the authority of those whom Tom Brokaw has called (probably with poor insight) the “greatest generation.”

In fact most of the early leaders of Rock ‘n Roll and Rock music were born before the end of World War II, yet it was largely the Post-War Baby Boomer generation that carried that vanguard’s initiative to assert its claim to revolution against the “system.” Because of the music, drugs and social revolution, a true generation gap emerged, which was exploited as a weakness in the previous generation rather than as a strength of the baby boomers. As a result, the baby boomers did not have to learn the cultural information and behavior of previous generations, but by sheer force introduced their new experience, which was ultimately embraced by their senior generation.

The Vietnam Experience

During the 1960s and 1970s most people thought they understood the Vietnam conflict as a struggle for control of Vietnam between communist and anti-communist forces. People favored or supported one side or the other, with that accepted underlying reality as a common point of reference. Some said it was a necessary war against communist influence and expansion, some said war for any purpose was wrong, some suggested it was right for the Vietnamese to oppose external powers, and others that it wasn’t any of our business. Some said that the situation was presented with lies, but that was mainly about the methods used to present and justify responsive actions (like sending American advisors rather than soldiers) rather than about questioning the essential reality of the struggle between communist and anti-communist forces. No one really suggested that American involvement was primarily a method to test military weapons, to gain influence over European powers, to provide employment to massive numbers of less educated Americans, or to achieve other domestic political objectives.

Was there any truth to any of the debating positions of the 1960s? Yes, there was probably some truth to all of them. What was the result of the whole mess? In the end – a lot of spending, death, domestic strife, social change and Indochinese immigrants to the USA. We fought communist (Chinese? Russian?) influence and supported the royalists and French colonial collaborators. When we lost the war, we resettled large numbers of those supporters, including ethnic Chinese, in the US and other countries. Where were the anti-war protesters then? Where were the protests against the reeducation and retributions which followed the fall of the puppet government? Were there any street protests against the excesses that followed in Cambodia?

So ultimately was there any sincerity to the protests about the war in Vietnam, or about the conditions in most other countries? Probably little. Maybe it was only that the spoiled children of the elite just wanted to enjoy their unearned privilege and party it up (maybe the same situation was behind the Tian An Men Square demonstrations). True issues such as the absurdity of part-time and limited warfare were not really on the table then or afterwards, and in fact such warfare became part of standard American military procedure. Neither was the right of a people to national self-determination. In fact the only things that mattered were loss of life, national wealth, national disillusionment and the effect the war had on the spoiled baby boomers themselves. Matters of principle and actual policy issues did not seem to have any meaning. Since the spoiled baby boomers – under conditions of national prosperity and as a powerful demographic bubble – managed to get out of that mess on their own terms, they just kept on moving without looking back or being responsible to anyone but themselves, as the big winners in the lottery of history. Having emerged victorious from the 1960s at no significant cost to themselves, the baby boomers continue to expect to have things their way at no cost to themselves, to insist on the freedom to self-righteously do as they like, and to dismiss the values that matter to others.

Baby Boomer Privilege

For those of us who grew up with a vanishing historical American culture and the establishment of individual leisure activities unconstrained by morality and community, it seems that sex, drugs, and rock and roll have always been available and in the same form as we have experienced them. The sexual revolution was treated in more detail in an earlier section. Historically we can see that drug use has been a feature of most human societies. However, drug use (broadly defined to include alcohol) has been confined mainly to those who could afford the purchase or harvesting cost, and to those who could afford the use (capacity and time) cost. Very few societies could afford drug use by large numbers of their members or for long periods of time (other than festivals), simply because of the cost and the loss of time and ability.

In the United States after WWII the affluence created by the Bretton Woods world financial system allowed so much excess wealth and productive capacity that a whole generation was free for leisure until their 30s or later, with no responsibilities, moral obligations or fear of starvation. Some of this leisure was devoted to recreational drug use, but to meet the huge demand to fill time, the music industry emerged in earnest with rock ‘n roll.

Before rock and roll most lyrics were simple and expressed common experience and feelings. Since these were common, almost everyone could play the tune on an instrument or sing, so there was limited scope for commercialization. Since these were shared experiences there was little unique art in the music either, and listeners were rarely shocked, although they were generally entertained and amused.

While there was nascent music production earlier in the 20th Century, mainly with Big Band and Swing music, it was in the 1950s that the music industry really began to develop. It appears to have had two main streams that converged and diverged throughout the 20th Century: folk and rock/pop. Folk music had long had a rich tradition in the United States, particularly in Appalachia, Louisiana and the Mexican border areas. Most attention was placed on White music from Appalachia and Negro music from the American South. Early pioneers in musicology were mainly interested in White music, but by the 1950s a lot of Negro music began to capture attention as well. Negro music became increasingly popular and traveled to the British Isles, where it was a tremendous influence on the development of rock ‘n roll and later rock music. In fact rock music was to some extent a synthesis of the martial music of the British Isles and blues (not so much jazz). As such it was not so much a deviation from existing styles of music as a combination of them with new technology (amplification and instruments) and resources.

In the early stages folk music of all types and Negro music were promoted. Negro music developed more distinctly into soul music, and into pop music with white performers. The Negro audience was not affluent or large enough to sustain an industry, so promoters created a new morality for this music as a tool for breaking down racial and behavioral restrictions in society, as if restrictions were wrong simply because they restricted. The overall marketing approach for popular music was class and racial mixing, the relaxation of codes of behavior, and the expression of rebellion and sexual freedom. This process reached its zenith in recent years when soul and pop ultimately developed into rap, with its anti-social and criminal themes. In contrast, punk music was more of a White music with similar themes, but a shorter life.

Rock ‘n Roll could be seen as ultimately developing into Rock music and leaving Negro music behind, except for the limited impact of Funk music. The Beatles could be considered Rock ‘n Roll in their early period and rock/pop in their later period. In the early Rock ‘n Roll period, they, Elvis Presley and others attracted cult followings in addition to groupies. The cult followings displayed odd behavior such as fainting and other losses of control, such as had earlier been experienced at religious events. The music of the Beatles and other performers was accused of containing embedded Satanic messages and of otherwise creating social disturbances. As this music became more mainstream, most of the earlier odd behavior disappeared.

Proponents of Rock music encouraged recreational drug use for its best enjoyment. Drug use, psychedelic art and rock music were inextricably connected, especially during the 1960s and 1970s. A huge number of the baby boom generation wasted countless months and years intoxicated with drugs, art and rock music. Drugs and related art, too, became mainstream, and the most egregious excesses of users began to be tempered.

It is interesting that in the early stages some objections to these phenomena were raised by parents, but after a short time, as with objections to other rights movements, most objections ceased. By the end of the 20th Century the use of rock music and its often morally objectionable language had become widespread in commercial marketing, and there was an absence of significant social objection to it.

The early promotion of folk music waned after the 1960s, but began to re-emerge in the 1980s as part of world music. This was another technique to challenge the integrity of traditional American culture through its music by introducing a fusion style springing from no authentic experience. The best examples of this I can think of are a Negro country music group with a Hispanic singer performing salsa-esque songs at MerleFest in North Carolina, and Linda Ronstadt performing with a Mexican mariachi band at the same event.

This overview of popular music shows that it was used to dilute and distort existing cultural patterns. In so doing, it created a faux-culture rather than a true one, since it was not related to anything other than drugs and leisure activity. Ultimately technology allowed any form, combination and amount of music to be consumed at any time, or indeed, at all times. Of course now, with the Internet, file sharing and digital music devices, the cost has been greatly reduced, but the amount of money, energy, creativity and productive capacity devoted to music and visual entertainment since the advent of Rock ‘n Roll is mind-boggling, especially when we understand that it is, in and of itself, not production but leisure. Moreover, it may be socially counterproductive and incapacitating leisure. How could our society afford this? Is this a social investment that will bring benefits in the generations to come? Do we benefit when we adopt slogans from social misfits (modern musicians) in place of those from direct experience, or from ancient saints and sages?
