DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This Office Action is in response to the application filed on 05/02/2024.
3. The IDS filed on 05/02/2024 has been considered and entered into the application file.
4. Claims 1-20 are pending; all pending claims are examined and rejected herein.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
5. Claims 1-4 and 6-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Vadodaria (US 10332297 B1).
Vadodaria is directed to an "Electronic Note Graphical User Interface Having Interactive Intelligent Agent And Specific Note Processing Features."
As per claim 1, Vadodaria discloses a method (see the flowcharts of Figs. 13, 23A-23B, 27, 36, and 39-40), comprising:
obtaining at least one response, generated by at least one language model, to be delivered by at least one processor-based digital human to at least one user (FIG. 39 is an illustration of the server-side technical architecture (AI) that explains the underlying mechanism of the Artificial Intelligence Engine, called Intelli-Agent. The Intelli-Agent accepts a text input and generates a Meaning Representation Structure in the form of Concepts/Entities and Actions managed by a Conversation Management Module, which receives user input and interacts with a Dialog State Module, a Psychology Engine Module, an NLP/NLU Engine, an NLG Engine, and a Back-end Proxy Module connected to a Backend Gateway and Service; column 10, line 60 - column 11, line 2), wherein the at least one response comprises at least one predicted gesture label identifying at least one gesture associated with the at least one response (wherein the intelligent interactive agent executes GUI operations comprising tapping, swiping, pinching, searching for text, entering text, and displaying retrieved content, in the one or more mobile electronic display notes displayed in the container display matrix; column 3, lines 34-39);
presenting a virtual touch display element to the at least one user, based at least in part on the at least one predicted gesture label, wherein the virtual touch display element comprises a plurality of selectable portions (FIG. 2 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) in a spread non-overlapping arrangement. FIG. 3 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) in an overlapping arrangement. FIG. 4 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing resizing of a note using a pinch feature. Column 8, lines 3-60);
determining two or more coordinates, in a plurality of dimensions, of at least one gesture of the at least one user, in connection with a given one of the plurality of selectable portions (For example, if the Note Graphical User Interface (NGUI) is positioned at coordinates (0,0), has a height of 100 px and a width of 80 px, and is being swiped from right to left: Referring now to FIG. 28, which shows a line illustration of a Note Graphical User Interface (NGUI) mobile electronic display note container showing a swipe event's start location (Sw.sub.f) and end location (Sw.sub.t); column 31, line 65 - column 32, line 4);
mapping the determined two or more coordinates of the at least one gesture to a selection of a given item associated with a corresponding one of the plurality of selectable portions (FIGS. 23a (top portion) and 23b (lower portion) are, as a combined figure, a functional block diagram showing a feature that allows mapping of text input to a mobile electronic display note, column 9, lines 60-63. Mapping text input to a mobile electronic display note, wherein an NLP Engine maps a text input given by a user to a mobile electronic display note; wherein verb words are linked to the Semantic Action; column 5, lines 34-41; an illustrative sketch of this coordinate-to-selection mapping follows the claim 1 analysis below); and
initiating at least one automated action based at least in part on the selected given item (When a Pinch gesture is initiated, the view is first brought into focus by following the Tap To Focus flow described earlier; column 25, lines 5-7);
wherein the method is performed by at least one processing device comprising a processor coupled to a memory (Vadodaria discloses an electronic note graphical user interface that has a human-like interactive intelligent animated agent and provides specific note processing features, including multimodal hands-free operation, and includes methods and systems that include a processor configured to provide an Intelligent Interactive Agent as a graphic animation to a user; Abstract).
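For illustration only, the following minimal Python sketch shows the kind of coordinate-to-selection mapping recited in claim 1 (determining gesture coordinates in a plurality of dimensions and mapping them to a selectable portion). It is not taken from the application or from Vadodaria; all names, dimensions, and values are hypothetical, loosely echoing the cited NGUI example positioned at (0,0) with a width of 80 px and a height of 100 px.

    # Illustrative sketch only; names and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Portion:
        """A selectable rectangular region of a virtual touch display element."""
        item: str      # the item associated with this selectable portion
        x: float       # left edge, px
        y: float       # top edge, px
        width: float   # px
        height: float  # px

        def contains(self, px: float, py: float) -> bool:
            return (self.x <= px <= self.x + self.width
                    and self.y <= py <= self.y + self.height)

    def map_gesture_to_item(start, end, portions):
        """Map a gesture's coordinates to the item of the portion containing
        its end point. `start` is carried to suggest how a swipe (start != end)
        could be distinguished from a tap; only the end point selects here.
        Returns None if the gesture ends outside every portion."""
        px, py = end
        for portion in portions:
            if portion.contains(px, py):
                return portion.item
        return None

    # An 80 px wide, 100 px tall NGUI at (0, 0), split into two portions,
    # swiped right to left so the gesture ends in the left portion.
    portions = [Portion("note-A", 0, 0, 40, 100), Portion("note-B", 40, 0, 40, 100)]
    print(map_gesture_to_item(start=(75, 50), end=(20, 50), portions=portions))  # note-A

The selected item returned by such a mapping would then drive the "initiating at least one automated action" step of the claim.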
As per claim 2, Vadodaria further discloses the method of claim 1, further comprising providing a haptic feedback response to the at least one user in response to the at least one gesture of the at least one user (Fig. 18 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing pinch and zoom of a note. Also see the several gestures shown in Figs. 5, 9, 10, 12, 15-17, and 20-22).
As per claim 3, Vadodaria further discloses the method of claim 1, wherein the at least one predicted gesture label is associated with a virtual interaction from the at least one user to the at least one processor-based digital human, and further comprising determining two or more coordinates, in a plurality of dimensions, of at least one gesture of at least one body part of the at least one user towards the at least one processor-based digital human, associated with the virtual interaction, and extending a virtual representation of a corresponding at least one body part of the at least one processor-based digital human towards the at least one body part of the at least one user using a graphics engine (FIG. 19 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing details of pinch and zoom handling. FIG. 20 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing details of pinch and zoom with a note being zoomed and the view/display size increased. FIG. 21 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing details of pinch and zoom with a note being zoomed-in (pinched out) and the view/display size increased with on-the-fly addition of a new task item during frame movement. FIG. 22 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing details of pinch and zoom with a note being zoomed-out (pinched in) and the view/display size decreased. Column 9, lines 42-59).
As per claim 4, Vadodaria further discloses the method of claim 3, further comprising providing a haptic feedback response to the at least one user in response to the extending of the virtual representation of the corresponding at least one body part of the at least one processor-based digital human towards the at least one body part of the at least one user (As defined herein, feedback that comprises a graphical or spoken output from the portable electronic device is performed by an intelligent agent displayed by means of an Animated 3D Personal Virtual Assistant with facial expressions, hand gestures, and body movements in a human-like appearance; column 16, lines 54-59. Also see Figs. 5, 9, 10, 12, 15-18, and 20-22).
As per claim 6, Vadodaria further discloses the method of claim 1, wherein the at least one predicted gesture label is associated with a virtual interaction from the at least one processor-based digital human to the at least one user, and further comprising providing two or more coordinates, in a plurality of dimensions, to a graphics engine that extends a virtual representation of at least one body part of the at least one processor-based digital human towards at least one body part of the at least one user (FIG. 22 is an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing details of pinch and zoom with a note being zoomed-out (pinched in) and the view/display size decreased. Column 9, lines 55-59).
As per claim 7, Vadodaria further discloses the method of claim 6, further comprising providing a haptic feedback response to the at least one user in response to the at least one user extending a corresponding at least one body part of the at least one user towards the virtual representation of the at least one body part of the at least one processor-based digital human (Referring now to FIG. 21, which shows an illustrative representation of mobile electronic display notes on a Note Graphical User Interface (NGUI) showing details of pinch and zoom with a note being zoomed-in (pinched out) and the view/display size increased with on-the-fly addition of a new task item during frame movement; column 26, lines 62-67).
As per claim 8, Vadodaria further discloses the method of claim 1, wherein the at least one automated action comprises one or more of: generating one or more notifications related to the selected given item (a Virtual Event Interpreter which receives Virtual Events from the Virtual Event Queue and executes them on the GUI, and controls animation of a Personal Virtual Assistant to perform gestures corresponding to the Virtual Events, and notifies a Virtual Event Performer of completion of the Virtual Event; column 7, lines 15-20);
generating one or more signals related to the selected given item (For voice communications, the overall operation of the mobile device is substantially similar, except that the received signals are output to the speaker, and signals for transmission are generated by the microphone. Alternative voice or audio I/O subsystems, such as a voice message recording subsystem, can also be implemented on the mobile device; column 14, lines 36-42); and
controlling a performance of at least one action in another system using the selected given item (Processor comprises electronic processing circuitry or control circuitry that operates to control the operations and performance of the electronic device and the application thereon; column 14, lines 59-62).
As per apparatus claims 9-14, these claims recite limitations similar to those of method claims 1-4 and 6-7, respectively. The apparatus claims are therefore rejected under the same citations applied to the corresponding method claims.
As per non-transitory processor-readable storage medium claims 15-20, these claims recite limitations similar to those of method claims 1-4 and 6-7, respectively. The non-transitory processor-readable storage medium claims are therefore rejected under the same citations applied to the corresponding method claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Vadodaria in view of Hildreth (US 20090228841 A1).
As illustrated in several figures, such as Figures 5, 9-10, 12, 15-18, and 20-22, Vadodaria discloses several types of gestural or virtual interactions applied to the displayed window, but Vadodaria fails to disclose that the virtual interaction from the at least one user to the at least one processor-based digital human comprises one or more of a virtual handshake, a virtual fist bump, a virtual high five, and a virtual thumbs up.
Hildreth, on the other hand, discloses the claimed gestural interactions. At [0024], Hildreth further discloses that common gestures used in everyday discourse include, for instance, an "air quote" gesture, a bowing gesture, a curtsey, a cheek-kiss, a finger or hand motion, a genuflection, a head bobble or movement, a high-five, a nod, a sad face, a raised fist, a salute, a thumbs-up motion, a pinching gesture, a hand or body twisting gesture, or a finger pointing gesture.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine the types of gestures taught by Hildreth with Vadodaria so that the gestural interactions allow a user to communicate through more actions, thereby facilitating more natural human/system interaction and providing feedback to the user through the display.
Therefore, it would have been obvious to combine Vadodaria with Hildreth to obtain the invention as specified in claim 5.
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20040193413 A1 discloses a 3-D imaging system for recognition and interpretation of gestures to control a computer. The system performs gesture recognition and interpretation based on a previous mapping of a plurality of hand poses and orientations to user commands for a given user. When the user is identified to the system, the imaging system images gestures presented by the user, performs a lookup for the user command associated with the captured image(s), and executes the user command(s) to effect control of the computer, programs, and connected devices.
US 20060258443 A1 discloses that a predetermined action is performed between a player object and another object positioned in a first determination range when a player designates said another object by controlling a pointing device. On the other hand, when the player performs an operation so as to designate said another object positioned outside the first determination range, a position of the player object is updated based on the designated position.
US 20210124425 A1 discloses a method, an electronic device, and a storage medium for gesture recognition. The method includes: acquiring a hand image; extracting a first standard feature of a hand in the hand image based on a feature mapping model, the feature mapping model being obtained by training based on second standard features of hands in a synthesized image sample and a real image sample; obtaining three-dimensional coordinates of multiple key points of the hand by processing the first standard feature; and determining a gesture of the hand based on the three-dimensional coordinates of the multiple key points.
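For illustration only, the following minimal Python sketch shows keypoint-based gesture classification of the general kind US 20210124425 A1 describes (three-dimensional coordinates of multiple hand key points mapped to a gesture). It is not taken from that reference; the trained model is replaced by a toy geometric rule, and all names, indices, and thresholds are hypothetical.

    # Illustrative sketch only; a real system would use a trained model.
    import numpy as np

    def classify_gesture(keypoints: np.ndarray) -> str:
        """Classify a hand gesture from 3-D coordinates of 21 hand key points
        (shape (21, 3)), assuming the common wrist-first indexing. The toy
        rule calls a finger 'extended' when its tip lies farther from the
        wrist than its base joint does."""
        wrist = keypoints[0]
        fingertips = keypoints[[4, 8, 12, 16, 20]]  # thumb..pinky tips
        bases = keypoints[[2, 5, 9, 13, 17]]        # corresponding base joints
        extended = (np.linalg.norm(fingertips - wrist, axis=1)
                    > np.linalg.norm(bases - wrist, axis=1))
        if extended.all():
            return "open_palm"
        if not extended.any():
            return "fist"
        if extended[0] and not extended[1:].any():
            return "thumbs_up"
        return "unknown"

    # Usage with placeholder data standing in for upstream 3-D key points.
    rng = np.random.default_rng(0)
    print(classify_gesture(rng.normal(size=(21, 3))))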
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU, whose telephone number is (571) 272-4051 and whose email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday, 9:30-5:30 (Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William L. Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TADESSE HAILU/Primary Examiner, Art Unit 2174