Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is a Non-Final Office Action addressing reissue application 18/530,628 (“the ‘628 Reissue Application” or “reissue application”). The ‘628 Reissue Application is a reissue of US Patent No. 11,194,466 (hereinafter “the ‘466 patent”), which issued on December 7, 2021.
U.S. Patent Application 16/716,813 claims foreign priority to DE 10 2018 132 794.3, filed 12/19/2018.
A certified copy of the Foreign Priority document is found in the prosecution file of US patent application 16/716,813.
Because the instant reissue application was filed on or after September 16, 2012, the statutory provisions of the America Invents Act ("AIA") will govern this reissue proceeding, and all references to 35 U.S.C. 251 and 37 CFR 1.172, 1.175, and 3.73 are to the current provisions. 37 CFR 1.171 through 1.178 are rules directed to reissue.
Because the effective filing date of the related original patent application (16/716,813) that the reissue application is based on is on or after March 16, 2013, the AIA First Inventor to File ("AIA-FITF") provisions do apply.
The broadening reissue application 18/530,628 is timely filed (12/6/2023) based on filing within two years of the issue date of US 11,194,466 (12/7/2021).
Litigation
Applicant is reminded of the continuing obligation under 37 CFR 1.178(b) to timely apprise the Office of any prior or concurrent proceeding in which US 11,194,466 is or was involved. These proceedings would include any trial at the Patent Trial and Appeal Board, interferences, reissues, reexaminations, supplemental examinations, and litigation.
Applicant is further reminded of the continuing obligation under 37 CFR 1.56, to timely apprise the Office of any information which is material to patentability of the claims under consideration in this reissue application. These obligations rest with each individual associated with the filing and prosecution of this application for reissue. See also MPEP §§ 1404, 1442.01 and 1442.04.
Based on the Examiner's independent review of US 11,194,466 and the prosecution history, no ongoing proceeding before the Office or current litigation involving the US 11,194,466 patent is found. Also, based upon the Examiner's independent review of the patent itself and the prosecution history, the Examiner cannot locate any previous reexaminations, supplemental examinations, or certificates of correction. The original patent issued with claims 1-20 ("Patented Claims").
Prosecution History
The parent application (the ‘813 application) was originally filed on 12/17/2019.
On 12/9/2020, the claims were initially rejected under 35 U.S.C. 103 as being unpatentable over Bowman et al., U.S. Publication No. 2017/0330479.
On 3/9/2021, the claims were substantially amended to further recite the segmentation of the image and simultaneous nature of the input.
On 6/4/2021, the amended claims were rejected as being anticipated by Jung et al., U.S. Publication No. 2014/0006033.
On 9/7/2021, the claims were further amended (see below) which led to the 10/12/2021 notice of allowance.
The below underlined elements were added to the claim during prosecution of the ‘813 application:
1. A method for entering commands into an electronic device, the method comprising the steps of:
displaying an image on a touch-sensitive display unit of the electronic device such that a fingertip of a user's finger is movable over at least a partial area of the displayed image, the electronic device having a speech recognition unit by means of which acoustic inputs into the electronic device are recognized;
receiving a selection made with the fingertip of the user of the electronic device on at least a segment of an image displayed on the touch-sensitive display unit;
receiving an acoustic input with the speech recognition unit of the electronic device and subjecting at least the selected segment of the image to image analysis after receiving the acoustic input; and
generating a command for the electronic device or carrying out an action on the electronic device only if the fingertip is on at least the selected segment of the image and the acoustic input is received simultaneously which command or action has been shown or symbolized on the display unit, and/or runs a program on the electronic device as a result of the image analysis and displays information about at least the selected segment of the image on the display unit as a result of the image analysis.
Oath / Declaration
The reissue oath/declaration filed with this application is defective (see 37 CFR 1.175 and MPEP § 1414) because of the following:
The reissue oath/declaration fails to identify at least one error which is relied upon to support the reissue application. See 37 CFR 1.175 and MPEP § 1414. It is not sufficient for an oath or declaration to merely state "broader claims"; rather, the oath or declaration must identify a specific error to be relied upon.
As per MPEP 1414 II B:
For an application filed on or after September 16, 2012 that seeks to enlarge the scope of the claims of the patent, the reissue oath or declaration must also identify a claim that the application seeks to broaden. A general statement, e.g., that all claims are broadened, is not sufficient to satisfy this requirement. In identifying the error, it is sufficient that the reissue oath/declaration identify a single word, phrase, or expression in the specification or in an original claim, and how it renders the original patent wholly or partly inoperative or invalid.
As per MPEP 1414 II C:
It is not sufficient for an oath/declaration to merely state “this application is being filed to correct errors in the patent which may be noted from the changes made in the disclosure.” Rather, the oath/declaration must specifically identify an error. In addition, it is not sufficient to merely reproduce the claims with brackets and underlining and state that such will identify the error. See In re Constant, 827 F.2d 728, 729, 3 USPQ2d 1479 (Fed. Cir.), cert. denied, 484 U.S. 894 (1987). Any error in the claims must be identified by reference to the specific claim(s) and the specific claim language wherein lies the error.
Accordingly, Claims 1-20 are rejected as being based upon a defective reissue Declaration under 35 U.S.C. 251 as set forth above. See 37 CFR 1.175.
The nature of the defect(s) in the Declaration is set forth in the discussion above in this Office action.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 2-6, 12-13, and 18-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 2, 12, and 18, as well as dependent claims 3-6, 13, and 19, each recite “segmenting at least the selected segment of the image as a result of the image analysis and displaying objects recognized by the image analysis on the display unit”, where this occurs after the audio input / image analysis, as per the corresponding independent claims. However, the specification is completely silent with regard to this sort of post-audio-confirmation, post-image-analysis segmentation.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ghassabian, U.S. Patent No. 9,158,388, in view of Son et al., U.S. Publication No. 2016/0154624, hereinafter Son.
With regard to claims 1, 11, and 17, which teach “A method for entering commands into an electronic device,” Ghassabian teaches a system and method for entering commands into a device comprising a user interface, where commands are entered by a combination of touch and speech input (see paragraph 4, lines 8-30).
With regard to claims 1, 11, and 17, which teach “the method comprising the steps of: displaying an image on a touch-sensitive display unit of the electronic device such that a fingertip of a user's finger is movable over at least a partial area of the displayed image, the electronic device having a speech recognition unit by means of which acoustic inputs into the electronic device are recognized;” Ghassabian teaches providing a touch-sensitive screen displaying an onscreen keyboard where the user is enabled to make a selection, via a user’s finger, stylus, etc., over a partial area of the entire displayed image (key and/or surrounding keys). (see 26:32-50 and 27:9-15)
With regard to claims 1, 11, and 17, which teach “receiving a selection made with the fingertip of the user of the electronic device on at least a segment of an image displayed on the touch-sensitive display unit; receiving an acoustic input with the speech recognition unit of the electronic device and subjecting at least the selected segment of the image to image analysis after receiving the acoustic input; and”, Ghassabian teaches receiving a user selection of an area of the display providing a subset of selectable options (e.g. ‘h’, ‘j’, ‘y’, ‘u’) and receiving an input from a user audibly speaking a letter intended to be entered (e.g. saying ‘h’). (see 26:32-27:16) This audio input is processed via voice recognition software. (see 28:20-30)
With regard to claims 1, 11, and 17, which teach “generating a command for the electronic device or carrying out an action on the electronic device only if the fingertip is on at least the selected segment of the image and the acoustic input is received simultaneously which command or action has been shown or symbolized on the display unit, and/or runs a program on the electronic device as a result of the image analysis and displays information about at least the selected segment of the image on the display unit as a result of the image analysis”, Ghassabian teaches generating a command, via program code, for entry into the text entry system when a finger is on an impact zone comprising a few potential character entry candidates and a user audibly states one of the characters in the impact zone, while the impact zone is pressed. (see 26:55-27:16 and 19:31-43) Following selection, confirmation is displayed on the screen and/or audibly provided to the user.
Though the keypad displayed on the touchscreen would be understood by one of ordinary skill in the art as an ‘image’, Ghassabian does not specifically describe it as such. Son teaches a system for using a combination of touch and voice input to perform an action, similar to that of Ghassabian, but further specifically teaches an operation being performed on an image, where the specific area of the image over which the user’s press occurs is used in combination with a spoken command to generate input (see paragraphs 202-208). Here, for example, a specific area touched on a map can be segmented out and identified by coordinate information to be sent to a user, via a coded program, using text or email software. Alternatively, an image can be selected and parsed for persons in the image, information about those persons found, and the user enabled to selectively send, via a coded program, the image to those individuals using a text or email program. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the image data with embedded data of Son in the system of Ghassabian, as this is likely how Ghassabian operates without explicitly noting so.
With regard to claims 2, 12, and 18, which teach “further comprising the steps of segmenting at least the selected segment of the image as a result of the image analysis and displaying objects recognized by the image analysis on the display unit”; Ghassabian further teaches, after segmenting the set of all keys (‘a’-‘z’, ‘1’-‘0’, etc.) into the segment including those keys in the impact zone (e.g. ‘h’, ‘j’, ‘y’, ‘u’) and receiving verbal input confirming an area of that zone (e.g. ‘h’, ‘hello’), further segmenting out the desired key and outputting objects recognized through the image analysis (e.g. ‘h’, ‘hello’, ‘elllo’, etc.). (see 26:55-27:16 and 19:31-43)
With regard to claim 3, which teaches “wherein, during said step of receiving an acoustic input, the speech recognition unit responds to a plurality of predefinable or teachable inputs”; Ghassabian further teaches using speech recognition to aid in input through a predefined language database while allowing for further correction of unrecognized words. (see 27:16-28:30)
With regard to claim 4, which teaches “further comprising the step of assigning predetermined acoustic inputs to selectable areas on the display unit such that the action associated with each of the selectable areas is carried out only after the assigned predetermined acoustic input is received”; Ghassabian further teaches assigned regions to different keys with associated assigned acoustic information. (see 4:16-30)
With regard to claim 5, which teaches “further comprising the step of optically changing the selectable areas or at least the selected one of the selectable areas when the pointer sweeps over it”; Ghassabian further teaches a gliding action effecting changes to the selectable areas and providing input (see 21:3-45)
With regard to claim 6, which teaches “wherein the selected one of the selectable areas is or remains displayed on the display unit regardless of the action taken or initiated”; Ghassabian further teaches the original keypad, the selected zone, and the final key all remaining displayed through selection (supra).
With regard to claim 7, which teaches “wherein, during said step of receiving an acoustic input, the speech recognition unit responds only to a plurality of predefinable or teachable inputs”; Ghassabian further teaches using speech recognition to aid in input through a predefined language database while allowing for further correction of unrecognized words. (see 27:16-28:30)
With regard to claim 8, which teaches “further comprising the step of assigning predetermined acoustic inputs to selectable areas on the display unit such that the action associated with each of the selectable areas is carried out only after the assigned predetermined acoustic input is received”; Ghassabian further teaches assigned regions to different keys with associated assigned acoustic information. (see 4:16-30)
With regard to claim 9, which teaches “further comprising the step of optically changing the selectable areas or at least the selected one of the selectable areas when the fingertip sweeps over it”; Ghassabian further teaches a gliding action effecting changes to the selectable areas and providing input. (see 21:3-45)
With regard to claim 10, which teaches “wherein the selected one of the selectable areas is or remains displayed on the display unit regardless of the action taken or initiated”; Ghassabian further teaches the original keypad, the selected zone, and the final key all remaining displayed through selection. (supra)
With regard to claims 13 and 19, which further teach “wherein the objects are displayed with further information”, Ghassabian further teaches presenting options for the potential output objects and then the associated output recognized through the image analysis (e.g. for ‘h’, present ‘hello’, ‘elllo’, etc. and for ‘t’, present ‘text’ ‘ext’, etc.). (see 26:55-27:16 and 19:31-43)
Son further teaches an operation being performed on an image where the specific area of the image over which the user’s press occurs is used in combination with a spoken command to generate input that then provides an output on the screen of additional information (see paragraphs 202-208 and 216-218). Here, information about a user in the picture can be provided, a confirmation of a message sent can be provided, etc.
With regard to claim 14, which further teaches “wherein the image on the touch-sensitive display unit represents a photo”, Son further teaches in paragraph 208, the image representing a photo (image of a map / key).
With regard to claim 15, which further teaches “wherein the image on the touch-sensitive display unit represents a photo without active areas”, Son further teaches in paragraph 208, the image representing a photo, without active areas.
With regard to claim 16, which further teaches “wherein the information about at least the selected segment is a webpage”, Ghassabian further teaches revealing information about a webpage. (19:1-50) Son further teaches executing another application (see paragraph 208) via revealed information.
With regard to claim 20, which further teaches “wherein the image on the touch-sensitive display unit represents a photo”, Son further teaches in paragraph 208, the image representing a photo, without active areas.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS G BONSHOCK whose telephone number is (571)272-4047. The examiner can normally be reached M-F 7:15 - 4:45.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Kosowski can be reached on 571-272-3744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS G BONSHOCK/
Primary Examiner, Art Unit 3992

Conferees:
/B. James Peikari/
Primary Examiner, Art Unit 3992
/ALEXANDER J KOSOWSKI/Supervisory Patent Examiner, Art Unit 3992