DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities: in Paragraph 62, “skill in the,” should read “skill in the art,”.
Appropriate correction is required.
Claim Objections
Claims 1, 2, 7, 10, 15 and 19 are objected to because of the following informalities:
Claim 1, “method comprising the steps” should read “method comprising steps”;
Claims 2 and 10, “the natural rhythm” should read “a natural rhythm”;
Claims 7 and 15, “the response with voice” should read “the response with the voice”;
Claims 7 and 15, “the user” should read “a user”;
Claim 19, “to facilitate the delivery of the curriculum content” should read “to facilitate delivery of the curriculum content”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4, 8, 16 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 recites the limitation "the step of generating one or more holographic images comprising" in line 1. There is insufficient antecedent basis for this limitation in the claim, because it is unclear whether "generating one or more holographic images" refers to the same holographic image(s) recited in claim 1 or to a new instance of holographic image(s). If it is the same instance, the limitation should read, “the step of generating the one or more holographic images comprising”.
Claims 8 and 16 recite the limitation "the virtual realistic avatars" in line 3 of each claim. There is insufficient antecedent basis for this limitation in the claims, because it is unclear which virtual realistic avatars are being referred to; there is no previous mention of such avatars in the parent claims.
Claim 19 recites the limitation "the curriculum content" in line 3. There is insufficient antecedent basis for this limitation in the claim, because it is unclear which curriculum content is being referred to; there is no previous mention of such content in the parent claims.
Note: these claims most likely should depend from a different dependent claim or are missing elements. To resolve this issue, the claim dependencies should be reviewed, and any first instance of an element should be clearly introduced as a first instance using “a” or “an” rather than “the”; where multiple instances exist, later instances should be further distinguished, for example as “first”, “second”, and/or “third”, etc.
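Examiner's note (illustration only; hypothetical claim language): a first recitation would read “generating a holographic image of a professor”; a later reference back to that element would read “wherein the holographic image is displayed”; and a second, distinct image would be introduced as “a second holographic image” and thereafter referred to as “the second holographic image”.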
Claim Rejections - 35 USC § 103
Claims 1, 4-7, 9, 12-15, 17-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (U.S. Patent Application Publication No. 2021/0225186), hereinafter referenced as Yang, in view of Sun et al. (Pre-Avatar: An Automatic Presentation Generation Framework Leveraging Talking Avatar), hereinafter referenced as Sun, and Ritchey et al. (U.S. Patent Application Publication No. 2023/0308605), hereinafter referenced as Ritchey.
Regarding claim 1, Yang teaches a method of providing internet-based teaching utilizing a holographic projection, said method comprising the steps of: (abstract teaches "distance dedicated teaching system includes...holographic display module and a teaching service module" and claim 4 teaches "a wireless router and a 5G wide-area internet protocol (IP) network link by control"); this shows the distance dedicated teaching system would operate over the internet and is a method using a holographic module/projection; receiving, by a processor, a query from a user device, the query corresponding to a curriculum content comprising one of text, multimedia, assessments, interactive learning modules, discussion forums, virtual labs, case studies, webinars, and online workshops (paragraph 103 teaches "According to an input type of the interaction, various operation commands for interaction with the holographic image are generated to realize the interaction of teachers and students with environment and the teaching resources.", claim 4 (6.3.3) teaches "synchronous online teaching between one lecturing classroom and multiple listening classrooms, or supports the synchronous online teaching between multiple lecturing classrooms and multiple listening classrooms.", and paragraph 28 teaches "environment and teaching resources...virtual teaching scenes are outputted as holographic resources using a Unity engine"); the teaching resources act as curriculum content since both refer to materials and information used to teach, the input of the interaction shows a query from a user device, the multiple listening classrooms show multimedia, and one of ordinary skill in the art would understand the Unity engine (and thus the receiving) is implemented using a processor, and that teaching resources also include text, assessments, interactive learning modules, discussion forums, virtual labs, case studies, webinars, and online workshops;
However, Yang fails to teach processing, by said processor, the query for generating a response from the curriculum content; generating, by said processor, one or more holographic images of a real-life professor; generating, by said processor, a voice corresponding to the real-life professor; integrating, by said processor, the response with the voice and the one or more holographic images; and transmitting, by said processor, the response to said user device such that the one or more holographic images emulate a facial expression of the real-life professor while producing a human-like speech with the voice on said user device.
However, Sun teaches processing, by said processor, the query for generating a response from the curriculum content (Sun, abstract teaches "users only need to replace slides
with different notes to generate another new video"); the video and avatar associated with such is the response from the curriculum content, and the query here is replacing slides, which would be done by user input; generating, by said processor, one or more holographic images of a real-life professor (Sun, page 1, left hand column (LHC), last line and right hand column (RHC), first two lines teach "task of talking face is to generate an avatar of the target speaker using a frontal photo of the target speaker and a driving video of an arbitrary person"); this shows the avatar/holographic image(s) [the holographic definition here is consistent with applicant's disclosure paragraph 52 stating "real-time interaction between users/students and virtual avatars i.e., holographic images"] are of a real-life professor since page 4, RHC mentions "system provides experience for 20 people, including corporate executives, intelligent customer service, online education teachers"; generating, by said processor, a voice corresponding to the real-life professor (Sun, abstract teaches "The system firstly clones the target speaker’s voice"); the cloned voice of the target speaker is the generated voice corresponding to the real-life professor/aforementioned online education teacher in instances where the online educator is the target speaker; integrating, by said processor, the response with the voice and the one or more holographic images (Sun, abstract teaches "and then generates the speech, and finally generate an avatar with appropriate lip and head movements"); the speech is the voice, the avatar with appropriate movements shows it being integrated with the holographic image(s), and the video/response has the voice/speech as well as the avatar/holographic image(s). Sun is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of generating holographic images alongside voice for a learning environment. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Yang's invention with the holographic image and corresponding voice generation techniques of Sun to lower the production and reproduction costs when preparing the communication materials (Sun, abstract). This would ensure more efficient generation of the holographic images.
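Examiner's note (illustration only): the following sketch outlines, at a high level, the clone-voice, generate-speech, then animate-avatar sequence described in Sun's abstract. All function and data names below are hypothetical placeholders for discussion purposes, not Sun's actual implementation.

```python
# Hypothetical sketch of the clone-voice -> generate-speech -> animate-avatar
# pipeline described in Sun's abstract. All names are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class VoiceModel:
    speaker_id: str          # identity of the cloned target speaker

def clone_voice(recording_path: str) -> VoiceModel:
    """Clone the target speaker's voice from a short recording (e.g., ~3 minutes)."""
    return VoiceModel(speaker_id=recording_path)

def synthesize_speech(voice: VoiceModel, notes: str) -> bytes:
    """Generate speech audio for the slide notes using the cloned voice."""
    return f"{voice.speaker_id}:{notes}".encode()  # stand-in for real audio data

def animate_avatar(photo_path: str, speech: bytes) -> dict:
    """Generate an avatar video with lip/head movements synchronized to the speech."""
    return {"photo": photo_path, "audio_len": len(speech)}

# Replacing the notes (the "query") yields a new video (the "response").
voice = clone_voice("professor_sample.wav")
speech = synthesize_speech(voice, "Today we cover holographic projection.")
video = animate_avatar("professor_front.jpg", speech)
print(video)
```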
However, the combination of Yang and Sun fails to explicitly teach and transmitting, by said processor, the response to said user device such that the one or more holographic images emulate a facial expression of the real-life professor while producing a human-like speech with the voice on said user device; receiving, processing, generating, integrating, and transmitting by said processor (although one of ordinary skill in the art would understand a processor is capable of performing these functions and is typically responsible for doing such in an application/system).
However, Ritchey explicitly teaches and transmitting, by said processor, the response to said user device such that the one or more holographic images emulate a facial expression of the real-life professor while producing a human-like speech with the voice on said user device (Ritchey, paragraph 123 teaches "system analyzes the brain expressions and animates a user’s prerecorded facial expressions 297 draws from a database of prerecorded 3D a catalog of facial expressions from of the person talking and provides a mimicked simulation of a subscriber based on the subscribers words and/or brain activity to generate a holographic simulation that mimics the facial or bodily response which is transmitted to a remote user of in a remote location that is viewed a generated holographic projection PDA"); the holographic simulation mimicking the facial response shows the holographic images emulate a facial expression (the simulation is of the person talking, thus the real-life professor from Sun, which produces human-like speech with the voice), and transmission to a remote user shows transmitting, by the processor, the response to said user device; receiving, processing, generating, integrating, and transmitting by said processor (Ritchey, paragraph 101 teaches "retrieval system of cognitive memory wherein an auto-associative artificial intelligence processor with neural network operates using techniques for pre-processing a query pattern to establish relationship between a query pattern and sought stored pattern, to locate sought pattern, and to retrieve"); this shows receiving/retrieving, processing/pre-processing, and integrating/establishing a relationship done by the processor, and one of ordinary skill in the art would understand the generation is also done by processors alongside the transmitting when retrieving. Ritchey is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of emulating and mimicking the facial expressions of a real-life person. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yang and Sun with the emulation of facial expression techniques of Ritchey to improve image quality (Ritchey, paragraph 96) and improve results over time (Ritchey, paragraph 101). This would be done by emulating facial expressions accurately and keeping a catalog of them.
Regarding claim 4, the combination of Yang, Sun and Ritchey teaches the step of generating one or more holographic images comprising: capturing, by said processor, images of a plurality of real-life professors (Yang, paragraph 20 teaches "collecting diversified teaching behaviors and a holographic image of a lecturer in a lecturing classroom in real time" and paragraph 172 teaches "supports the synchronous online teaching between multiple lecturing classrooms and multiple listening classrooms."); the holographic image of a lecturer in real time shows a real-life professor image being captured (as a step of generating the holographic images) and multiple lecturing classrooms would mean there are images of a plurality of real-life professors; and generating, by said processor, virtual realistic avatars of each real-life professor of the plurality of real-life professors (Sun, abstract teaches "system firstly
clones the target speaker’s voice, and then generates the speech, and finally generate an avatar with appropriate lip and head movements" and page 4, RHC, last paragraph teaches "a comprehensive evaluation of the portability and reusability of the system is carried out. Specifically, the system provides experience for 20 people, including corporate executives, intelligent customer service, online education teachers"); 20 people including online education
teachers shows a plurality of real-life professors, and their reuse of the system ensures realistic avatar generation of each. The same motivations used in claim 1 apply here in claim 4.
Regarding claim 5, the combination of Yang, Sun and Ritchey teaches further comprising generating, by said processor, the voice for each real-life professor of the plurality of real-life professors (Sun, abstract teaches "system firstly clones the target speaker’s voice" and page 4, RHC, last paragraph teaches "a comprehensive evaluation of the portability and reusability of the system is carried out. Specifically, the system provides experience for 20 people, including corporate executives, intelligent customer service, online education teachers"); 20 people including online education teachers shows plurality of real-life professors and cloning target speaker's voice shows generating voice for each real-life professor of the plurality. The same motivations used in claim 1 apply here in claim 5.
Regarding claim 6, the combination of Yang, Sun and Ritchey teaches further comprising capturing, by said processor, voice samples of each real-life professor from pre-recorded speech segments for generating the voice corresponding to the response (Sun, abstract teaches "a system called Pre-Avatar, generating a presentation video with a talking face of a target speaker with 1 front-face photo and a 3-minute voice recording."); the 3-minute voice recording shows a voice sample from a pre-recorded speech segment for generating the aforementioned voice corresponding to the response, and when multiple users/educators reuse the system as aforementioned, this would occur for each real-life professor. The same motivations used in claim 1 apply here in claim 6.
Regarding claim 7, the combination of Yang, Sun and Ritchey teaches further comprising capturing, by said processor, a feedback of the user of said user device upon transmitting the response with voice and the one or more holographic images (Ritchey, paragraph 101 teaches "smart interface may be operated by at least one the user, host computer, or a remote user or agent to command and control said support apparatus and prompting at least one interactive audio, image, or audio and visual presentation feedback of at least one local, live, stored, and remote content transmitted to the apparatus in order to interact with said user’s environment or a remote environment."); the presentation feedback is of the user from the user device, and the remote content transmitted shows this is upon transmitting the response with the voice and the holographic image(s). The same motivations used in claim 1 apply here in claim 7.
Regarding claim 9, Yang teaches a system for providing internet-based teaching utilizing a holographic projection, said system comprising: (abstract teaches "distance dedicated teaching system includes...holographic display module and a teaching service module" and claim 4 teaches "a wireless router and a 5G wide-area internet protocol (IP) network link by control"); this shows the distance dedicated teaching system would operate over the internet and is a system using a holographic module/projection; receive a query from a user device (paragraph 103 teaches "According to an input type of the interaction, various operation commands for interaction with the holographic image are generated to realize the interaction of teachers and students with environment and the teaching resources."); the input of the interaction shows a query from a user device;
However, Yang fails to explicitly teach a processor; and a memory coupled to said processor, wherein said memory stores program instructions executed by said processor, to: process the query for generating a response; generate one or more holographic images of a real-life professor; generate a voice corresponding to the real-life professor; integrating the response with the voice and the one or more holographic images; and transmit the response to said user device such that the one or more holographic images emulate a facial expression of the real-life professor while producing a human-like speech with the voice on said user device.
However, Sun teaches process the query for generating a response (Sun, abstract teaches "users only need to replace slides with different notes to generate another new video"); the query here is replacing slides, which would be done by user input, and the video and avatar associated with such is the response; generate one or more holographic images of a real-life professor (Sun, page 1, left hand column (LHC), last line and right hand column (RHC), first two lines teach "task of talking face is to generate an avatar of the target speaker using a frontal photo of the target speaker and a driving video of an arbitrary person"); this shows the avatar/holographic image(s) [the holographic definition here is consistent with applicant's disclosure paragraph 52 stating "real-time interaction between users/students and virtual avatars i.e., holographic images"] are of a real-life professor since page 4, RHC mentions "system provides experience for 20 people, including corporate executives, intelligent customer service, online education teachers"; generate a voice corresponding to the real-life professor (Sun, abstract teaches "The system firstly clones the target speaker’s voice"); the cloned voice of the target speaker is the generated voice corresponding to the real-life professor/aforementioned online education teacher in instances where the online educator is the target speaker; integrating the response with the voice and the one or more holographic images (Sun, abstract teaches "and then generates the speech,
and finally generate an avatar with appropriate lip and head movements"); speech is the voice, avatar with appropriate movements shows it being integrated with the holographic image(s), and the video/response has the voice/speech as well as avatar/holographic image(s). The same motivations used in claim 1 for Sun apply here in claim 9.
However, the combination of Yang and Sun fails to teach a processor; and a memory coupled to said processor, wherein said memory stores program instructions executed by said processor, to: and transmit the response to said user device such that the one or more holographic images emulate a facial expression of the real-life professor while producing a human-like speech with the voice on said user device.
However, Ritchey teaches a processor; and a memory coupled to said processor, (Ritchey, paragraph 101 teaches "retrieval system of cognitive memory wherein an auto-associative artificial intelligence processor with neural network operates using techniques for pre-processing a query pattern to establish relationship between a query pattern and sought stored pattern, to locate sought pattern, and to retrieve");
this shows a processor and memory coupled, as well as receiving/retrieving done by the processor, processing/pre-processing, and integrating/establishing a relationship, and one of ordinary skill in the art would understand the generation is also done by processors alongside the transmitting when retrieving; wherein said memory stores program instructions executed by said processor, to: (Ritchey, paragraph 49 teaches “programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein”, paragraph 90 teaches “coded instructions that are stored permanently in read-only memory”, and paragraph 101 “memory wherein an auto-associative artificial intelligence processor with neural network”); this shows memory storing program instructions which are executed by the processor; and transmit the response to said user device such that the one or more holographic images emulate a facial expression of the real-life professor while producing a human-like speech with the voice on said user device (Ritchey, paragraph 123 teaches "system analyzes the brain expressions and animates a user’s prerecorded facial expressions 297 draws from a database of prerecorded 3D a catalog of facial expressions from of the person talking and provides a mimicked simulation of a subscriber based on the subscribers words and/or brain activity to generate a holographic simulation that mimics the facial or bodily response which is transmitted to a remote user of in a remote location that is viewed a generated holographic projection PDA"); the holographic simulation mimicking the facial response shows the holographic images emulate a facial expression (the simulation is of the person talking, thus the real-life professor from Sun, which produces human-like speech with the voice), and transmission to a remote user shows transmitting, by the processor, the response to said user device. The same motivations used for Ritchey in claim 1 apply here in claim 9.
Regarding claim 12, the system claim 12 recites similar limitations as method claim 4, and thus is rejected under similar rationale.
Regarding claim 13, the system claim 13 recites similar limitations as method claim 5, and thus is rejected under similar rationale.
Regarding claim 14, the system claim 14 recites similar limitations as method claim 6, and thus is rejected under similar rationale.
Regarding claim 15, the system claim 15 recites similar limitations as method claim 7, and thus is rejected under similar rationale.
Regarding claim 17, the combination of Yang, Sun and Ritchey teaches wherein the query received from said user device corresponds to a curriculum content (Yang, paragraph 103 teaches "According to an input type of the interaction, various operation commands for interaction with the holographic image are generated to realize the interaction of teachers and students with environment and the teaching resources."); teaching resources act as curriculum content since both refer to materials and information used to teach and this corresponds to the aforementioned query.
Regarding claim 18, the combination of Yang, Sun and Ritchey teaches wherein the curriculum content comprises one of text, multimedia, assessments, interactive learning modules, discussion forums, virtual labs, case studies, webinars, and online workshops (Yang, claim 4 (6.3.3) teaches "synchronous online teaching between one lecturing classroom and multiple listening classrooms, or supports the synchronous online teaching between multiple lecturing classrooms and multiple listening classrooms"); the multiple listening classrooms show multimedia, and one of ordinary skill in the art would understand the teaching resources (aforementioned in claim 17 as curriculum content) also include text, assessments, interactive learning modules, discussion forums, virtual labs, case studies, webinars, and online workshops.
Regarding claim 20, the non-transitory, computer-readable medium claim 20 recites similar limitations as method claim 1, and thus is rejected under similar rationale. In addition, Ritchey teaches non-transitory, computer-readable medium storing instructions that, when executed by a computer system (Ritchey, fig. 11 shows a hard drive as part of the system, which would act as a non-transitory computer-readable medium, paragraph 49 teaches “programmable hardware, such as by performing a reception of or a transmission of one or more instructions in relation to one or more operations described herein”, paragraph 90 teaches “coded instructions that are stored permanently in read-only memory”, and paragraph 101 “memory wherein an auto-associative artificial intelligence processor with neural network”); this shows the non-transitory, computer-readable medium/tangible hard drive would have instructions stored on it to be executed by a computer to perform tasks. The same motivations used in claim 1 apply here in claim 20.
Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Sun and Ritchey as applied to claims 1 and 9 above, and further in view of Kunizawa (U.S. Patent No. 4,964,167), hereinafter referenced as Kunizawa.
Regarding claim 2, the combination of Yang, Sun and Ritchey fails to teach further comprising producing, by said processor, the human-like speech by simulating the natural rhythm, intonation and pronunciation of a human voice of the real-life professor.
However, Kunizawa teaches further comprising producing, by said processor, the human-like speech by simulating the natural rhythm, intonation and pronunciation of a human voice of the real-life professor (Kunizawa, col. 1, lines 59-64 teach "composing process prepares pronouncing voices from such text data as a character array of a word or the like text (the array being employed as phonetic information) and such rhythm information as accentuation, intonation phonetic length, and the like of the text" and col. 1, lines 67-68 and col. 2, lines 1-2 teach "text composing process, which may be regarded as an ultimate aspect of the voice composing system that has stepped even into intellectual faculties of human voice"); the rhythm information here shows the natural rhythm, and stepping into intellectual faculties of human voice shows human-like speech as well as the pronunciation and intonation being of a human voice, which would be of the real-life professor from above when viewed in combination. Kunizawa is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of producing human-like speech using rhythm, intonation and pronunciation. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yang, Sun and Ritchey with the human-like speech techniques of Kunizawa to improve tone quality of the reproduced voice of the word (Kunizawa, col. 7, lines 41-42). This would be done using rhythm, intonation and pronunciation.
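Examiner's note (illustration only): the sketch below pairs phonetic information (the character array of a word) with rhythm information (accentuation, intonation, phonetic length), in the manner of the voice-composing process Kunizawa describes. The data structure and default values are hypothetical and are provided only to clarify the mapping.

```python
# Illustrative sketch only: pairing phonetic information with rhythm
# information (accentuation, intonation, phonetic length), as in the
# voice-composing process Kunizawa describes. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Phoneme:
    symbol: str        # phonetic unit drawn from the text's character array
    accent: float      # accentuation (relative stress)
    pitch: float       # intonation contour value (Hz)
    duration_ms: int   # phonetic length

def compose_voice(text: str) -> list[Phoneme]:
    """Attach simple default rhythm information to each character of the text."""
    return [Phoneme(symbol=ch, accent=1.0, pitch=220.0, duration_ms=80)
            for ch in text if not ch.isspace()]

for p in compose_voice("hello"):
    print(p)
```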
Regarding claim 10, the system claim 10 recites similar limitations as method claim 2, and thus is rejected under similar rationale.
Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Sun and Ritchey as applied to claims 1 and 9 above, and further in view of Schwartz (U.S. Patent Application Publication No. 2003/0037063), hereinafter referenced as Schwartz.
Regarding claim 3, the combination of Yang, Sun and Ritchey fails to teach the step of processing the query comprising processing the query using an Artificial Neural Networks (ANN) and a Fuzzy Logic (FL).
However, Schwartz teaches the step of processing the query comprising processing the query using an Artificial Neural Networks (ANN) and a Fuzzy Logic (FL) (Schwartz, paragraph 72 teaches "a hybrid of fuzzy logic and neural networks, or combination of the two, and thus may have the advantages of both fuzzy logic systems and ANN systems" and paragraph 95 teaches "the fuzzy logic, ANN or AFLRA software 322, or a separate application feeding pre-processed input into the software may process query data"); this shows processing query using ANN and FL. Schwartz is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of processing query using fuzzy logic and ANN. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yang, Sun and Ritchey with the ANN and FL techniques of Schwartz to have the advantages of both fuzzy and ANN systems, i.e., they have the ability to learn and adapt, and knowledge can be discovered from the system (Schwartz, paragraph 72). This ensures a more robust invention.
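Examiner's note (illustration only): a minimal sketch of a hybrid fuzzy-logic/neural-network query handler of the kind Schwartz describes. The membership functions, weights, and topic categories below are invented for illustration and do not reflect Schwartz's disclosed software.

```python
# Minimal illustrative sketch of hybrid fuzzy logic + ANN query processing.
# Membership functions and weights are invented placeholders.
import math

def fuzzy_memberships(query: str) -> dict[str, float]:
    """Map a query to fuzzy membership degrees in [0, 1] per topic."""
    q = query.lower()
    return {
        "lecture": min(1.0, q.count("lecture") / 2 + 0.1),
        "assessment": min(1.0, q.count("quiz") / 2 + 0.1),
    }

def tiny_ann(features: list[float]) -> float:
    """One-neuron 'network': weighted sum passed through a sigmoid."""
    weights = [0.8, 0.6]  # illustrative fixed weights
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Fuzzy memberships feed pre-processed input into the neural stage.
memberships = fuzzy_memberships("show me the next lecture quiz")
score = tiny_ann(list(memberships.values()))
print(memberships, round(score, 3))
```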
Regarding claim 11, the system claim 11 recites similar limitations as method claim 3, and thus is rejected under similar rationale.
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Sun and Ritchey as applied to claims 7 and 15 above, and further in view of Hardi (U.S. Patent Application Publication No. 2019/0079783), hereinafter referenced as Hardi.
Regarding claim 8, the combination of Yang, Sun and Ritchey fails to explicitly teach further comprising modifying, by said processor, the one or more holographic images by changing movement of one of eyebrows, lips and body parts of the virtual realistic avatars in response to the feedback captured.
However, Hardi explicitly teaches further comprising modifying, by said processor, the one or more holographic images by changing movement of one of eyebrows, lips and body parts of the virtual realistic avatars in response to the feedback captured (Hardi, paragraph 86 teaches "AI avatar can simulate interaction with the user by being provided with real time audio form the environment and provide feedback via facial movement and lip movement"); real-time audio here is feedback captured from user and the holographic/avatar image having facial and lip movement shows that all three of eyebrows, lips and body parts of virtual realistic avatar move in response to that feedback. Hardi is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of holographic/avatar images being modified by movement of avatars in response to user feedback. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yang, Sun and Ritchey with the feedback and movement techniques of Hardi so as to better mimic the immersive experience of dealing with a person rather than a machine (Hardi, paragraph 86). This ensures higher engagement for users.
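Examiner's note (illustration only): a minimal sketch of adjusting avatar facial parameters in response to captured real-time audio, in the spirit of Hardi's AI avatar. The parameter names and scaling are hypothetical.

```python
# Illustrative sketch only: modifying avatar facial parameters in response to
# captured real-time audio feedback. Names and thresholds are hypothetical.
def update_avatar(params: dict[str, float], audio_level: float) -> dict[str, float]:
    """Raise eyebrows and open lips more as the captured audio gets louder."""
    updated = dict(params)
    updated["eyebrow_raise"] = min(1.0, audio_level)      # 0..1 blend weight
    updated["lip_open"] = min(1.0, audio_level * 0.8)     # track speech energy
    return updated

avatar = {"eyebrow_raise": 0.0, "lip_open": 0.0}
print(update_avatar(avatar, audio_level=0.6))
```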
Regarding claim 16, the system claim 16 recites similar limitations as method claim 8, and thus is rejected under similar rationale.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Yang in view of Sun and Ritchey as applied to claim 9 above, and further in view of Kruger et al. (U.S. Patent No. 11,050,854), hereinafter referenced as Kruger.
Regarding claim 19, the combination of Yang, Sun and Ritchey fails to teach wherein the processor executes the program instructions to integrate with an online learning management system (LMS) to facilitate the delivery of the curriculum content including educational courses, training programs, learning and development initiatives.
However, Kruger teaches wherein the processor executes the program instructions to integrate with an online learning management system (LMS) to facilitate the delivery of the curriculum content including educational courses, training programs, learning and development initiatives (Kruger, col. 7, lines 37-41 teach "integrated application may form part of a module of a learning management system that administrates, documents, tracks, reports, automates, and delivers educational courses, training programs, and learning and development programs"); this shows the instructions from above used to integrate LMS for curriculum content delivery which includes the types of learnings listed. Kruger is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of integrating an online LMS. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Yang, Sun and Ritchey with the integrated LMS techniques of Kruger to improve learning management systems by integrating training information with live information from the system for which users are being trained (Kruger, col. 2, lines 41-44). This would ensure a better user experience due to an improved LMS.
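Examiner's note (illustration only): a minimal sketch of delivering generated curriculum content to an online LMS of the kind Kruger describes. The endpoint, payload fields, and token below are invented placeholders, not Kruger's actual interface.

```python
# Hypothetical sketch of handing generated curriculum content off to an LMS
# that administrates and delivers courses. Endpoint and fields are invented.
import json
import urllib.request

def deliver_to_lms(base_url: str, token: str, course_id: str, content: dict) -> None:
    """POST curriculum content (course, training, or development module) to an LMS."""
    req = urllib.request.Request(
        url=f"{base_url}/courses/{course_id}/content",
        data=json.dumps(content).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call; requires a live LMS
        print(resp.status)

# Example (placeholder values):
# deliver_to_lms("https://lms.example.edu/api", "TOKEN", "CS101",
#                {"type": "educational course", "title": "Holographic Lecture 1"})
```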
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Younkes et al. (U.S. Patent Application Publication No. 2008/0160488) teaches "3D graphics modeling and rendering, taking text and converting it into actual audio, and then combining the 3D graphics modeling with the audio, provide a very realistic virtual person."
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAUMAN U AHMAD whose telephone number is (703)756-5306. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.U.A./Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611