DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is responsive to: Application filed 15 Apr. 2024.
Claims 1-18 are pending in this case. Claims 1, 17, and 18 are independent claims.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 8, 17, and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kanter (Pat. No.: US 9,230,160 B1; Filed: Aug. 27, 2012).
Regarding independent claims 1, 17, and 18, Kanter discloses a method implemented by one or more processors, the method comprising:
determining, by an automated assistant application, that one or both hands of a user are
located within a field of view of a camera of a computing device (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56),
wherein the automated assistant application is responsive to sign language commands performed by one or both hands of the user (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56);
causing a display interface of the computing device to render an output in response to
determining that one or both hands of the user are located within the field of view of the camera
of the computing device (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56),
wherein the output of the display interface indicates to the user that the automated
assistant application is available for receiving one or more sign language commands (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56);
determining, by the automated assistant application, that the user is providing the one or
more sign language commands (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56),
wherein the one or more sign language commands direct the automated assistant
application, and/or a separate application, to initialize one or more actions (col 3 lines 7-24), and
wherein the one or more sign language commands do not include an audible
input (col 3 lines 7-24);
causing the display interface of the computing device to render an additional output in
response to determining that the user is providing the one or more sign language commands (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56),
wherein the additional output indicates an interpretation of one or more sign
language commands as determined by the automated assistant application (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56); and
causing the automated assistant application, and/or the separate application, to initialize
the one or more actions in response to the user providing the one or more sign language
commands (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56).
Regarding dependent claim 2, Kanter discloses the method of claim 1, further comprising:
prior to determining that the one or more hands of the user are located within the field of
view of the camera of the computing device:
determining that the user is detected within the field of view of the camera of the
computing device, or is detected by an additional sensor of the computing device (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56),
wherein detection of the user by the camera or the additional sensor causes
the automated assistant application to initialize additional detection of one or both hands of the user (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56).
Regarding dependent claim 3, Kanter discloses the method of claim 2, wherein determining that the user is detected within the field of view of the camera of the computing device includes determining that a face or a gaze of the user is directed towards the camera of the computing device (col 5 lines 5-11 and col 6 lines 4-19).
Regarding dependent claim 4, Kanter discloses the method of claim 2, wherein determining that the user is detected within the field of view of the camera of the computing device includes determining that a gaze of the user is directed towards one or more graphical elements that are static, or in motion, at the display interface of the computing device (col 3 line 62 – col 4 line 28).
Regarding dependent claim 8, Kanter discloses the method of claim 1, wherein causing the display interface of the computing device to render the output includes:
causing the output to include a static, or dynamic, outline of one or both hands of the user
to be rendered at the display interface, or to include an avatar that is mimicking an arrangement
or a movement of one or more hands of the user (col 4 lines 29-55; col 6 lines 4-12).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kanter in view of Valdiva et al. (Pub. No.: US 2018/0096507 A1; Filed: Oct. 2, 2017) (hereinafter “Valdiva”).
Regarding dependent claim 7, Kanter does not expressly disclose the method of claim 1, wherein causing the display interface of the computing device to render the additional output includes:
causing the additional output to include an animation that mimics movement of the one or
both hands of the user simultaneous to the user providing the one or more sign language commands.
Valdiva teaches causing the additional output to include an animation that mimics movement of
the one or both hands of the user simultaneous to the user providing the one or more sign language commands (0172).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Valdiva with Kanter for the benefit of providing an intuitive experience for users—one that gives the users a sense of “presence,” or the feeling that they are actually in the virtual environment (0005).
Regarding dependent claim 9, Kanter discloses the method of claim 1, further comprising:
determining, prior to causing the automated assistant application and/or the separate
application to initialize the one or more actions, that the user has completed providing the one or
more sign language commands (col 3 lines 4-24); and
Kanter does not expressly disclose causing, in response to determining that the user has completed providing the one or more sign language commands, the display interface of the computing device to render a graphical timer that indicates an amount of time before the automated assistant initializes the one or more actions,
wherein, during the amount of time before the automated assistant application
initializes the one or more actions, the automated assistant application can receive a
particular sign language command or other gesture for preventing initialization of the one
or more actions, and
wherein the one or more actions are initialized when the user does not provide the
particular sign language command during the amount of time.
Valdiva teaches causing, in response to determining that the user has completed providing the one or more sign language commands, the display interface of the computing device to render a graphical timer that indicates an amount of time before the automated assistant initializes the one or more actions (0116; 0198),
wherein, during the amount of time before the automated assistant application
initializes the one or more actions, the automated assistant application can receive a
particular sign language command or other gesture for preventing initialization of the one
or more actions (0116; 0198), and
wherein the one or more actions are initialized when the user does not provide the
particular sign language command during the amount of time (0116; 0198).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Valdiva with Kanter for the benefit of providing an intuitive experience for users—one that gives the users a sense of “presence,” or the feeling that they are actually in the virtual environment (0005).
Regarding dependent claim 10, Kanter discloses the method of claim 9, wherein determining that the user has completed providing the one or more sign language commands includes determining that one or both hands of the user are no longer within the field of view of the camera of the computing device (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56).
Regarding dependent claim 11, Kanter discloses the method of claim 10, wherein the other gesture includes the user relocating one or both hands of the user to be within the field of view of the camera of the computing device (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56).
Regarding dependent claim 12, Kanter in view of Valdiva discloses the method of claim 9, further comprising:
causing, in response to determining that the user has completed providing the one or
more sign language commands, the display interface of the computing device to render selectable
elements (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56),
wherein a particular selectable element of the selectable elements is selected in
response to the user providing the particular sign language command (col 3 lines 7-24; col 3 line 63 – col 4 line 55; col 5 lines 5-56), and
wherein the one or more actions are initialized when the user selects a separate
selectable element of the selectable elements during the amount of time for the graphical
timer (Valdiva 0116; 0198).
Claims 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over Kanter in view of Rahmani et al. (Pub. No.: US 2023/0085161 A1; Filed: Jun. 29, 2022) (hereinafter “Rahmani”).
Regarding dependent claim 13, Kanter does not expressly disclose the method of claim 1, wherein causing the display interface of the computing device to render the additional output comprises causing the display interface to provide an American Sign Language (ASL) Gloss interpretation of the one or more sign language commands.
Rahmani teaches that causing the display interface of the computing device to render the additional output comprises causing the display interface to provide an American Sign Language (ASL) Gloss interpretation of the one or more sign language commands (0020-0021; 0028; 0052-0057).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Rahmani with Kanter for the benefit of improving the translation accuracy.
Regarding dependent claim 14, Kanter does not expressly disclose the method of claim 1, wherein causing the display interface of the computing device to render the additional output comprises causing the display interface to provide a natural language interpretation of an American Sign Language (ASL) Gloss interpretation of the one or more sign language commands.
Rahmani teaches causing the display interface to provide a natural language interpretation of an American Sign Language (ASL) Gloss interpretation of the one or more sign language commands (0020-0021; 0028; 0052-0057).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Rahmani with Kanter for the benefit of improving the translation accuracy.
Regarding dependent claim 15, Kanter in view of Rahmani discloses the method of claim 14, further comprising:
generating, using a generative model, the natural language interpretation of the ASL gloss
interpretation (0020-0021; 0028; 0052-0057).
Regarding dependent claim 16, Kanter in view of Rahmani discloses the method of claim 15, wherein the generative model is fine-tuned to generate the natural language interpretation of the ASL gloss interpretation, and wherein fine-tuning the generative model to generate the natural language interpretation of the ASL gloss interpretation comprises:
obtaining a plurality of training instances, each of the plurality of training instances
including training instance input and training instance output, the training instance input
including a corresponding training ASL gloss interpretation, and the training instance output
including a corresponding natural language interpretation of the corresponding ASL gloss
interpretation (0020-0021; 0028; 0052-0057); and
fine-tuning, based on the plurality of training instances, the generative model (0020-0021; 0028; 0052-0057).
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Kanter in view of Kim et al. (Pub. No.: US 2023/0085161 A1; Filed: Jun. 29, 2022) (hereinafter “Kim”).
Regarding dependent claim 5, Kanter does not expressly disclose the method of claim 2, wherein determining that the user is within the field of view of the camera of the computing device, or is detected by the additional sensor of the computing device, is performed when the computing device is operating in a low power mode, relative to default or another power mode that the computing device is operating in when the user is providing the one or more sign language commands.
Kim teaches wherein determining that the user is within the field of view of the camera of the computing device, or is detected by the additional sensor of the computing device, is performed when the computing device is operating in a low power mode, relative to default or another power mode that the computing device is operating in when the user is providing the one or more sign language commands (0008-0011; 0029-0032).
Therefore, prior to the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine Kim with Kanter for the benefit of having a camera for recognizing a user's gesture operate only when necessary, thereby reducing power consumption (0010).
Regarding dependent claim 6, Kanter in view of Kim discloses the method of claim 5,
wherein the camera of the computing device operates according to a reduced sampling
rate when the computing device is operating in the low power mode, or
wherein the camera is off and the additional sensor is operational when the computing
device is operating in the low power mode (0008-0011; 0029-0032; claim 6).
NOTE
It is noted that any citations to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES J DEBROW whose telephone number is (571)272-5768. The examiner can normally be reached on 09:00 - 06:00.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore can be reached on 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/James J Debrow/
Primary Patent Examiner
Art Unit 2174
571-272-5768