DETAILED ACTION
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 – 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kishimoto (Publication: US 2019/0005347 A1) in view of Kishi et al. (Publication: US 2017/0142340 A1) and Tsuzaki et al. (Publication: US 2008/0136754 A1).
Regarding claim 1, see rejection on claim 16.
Regarding claim 2, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 1.
Kishimoto discloses displaying the information regarding the target in the enlarged image ([0015] - identifying an OCR processing target area by capturing an entire document image and performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame and the like so as to prompt a user to enlarge and capture an image of the identified target area.).
Regarding claim 3, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 1.
Kishimoto discloses acquiring, as the information regarding the target, the information indicating a distance from an image capture device capturing the image to a prescribed position on the target ([0053] - When the difference (the change amount of the image capturing area) is a predetermined threshold value or greater (in other words, the camera of the mobile terminal is moved a predetermined distance or more) (YES in step S809), the main control unit 303 advances the processing to step S810. Whereas when the difference is not the predetermined threshold value or greater (NO in step S809), the main control unit 303 advances the processing to step S812.).
Kishi discloses displaying the information indicating the distance ([0184] - displays, on the main screen G1, a region G13 where a result of execution of the tool in step S170 (i.e., the distance LL) is displayed.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Kishimoto with displaying the information indicating the distance as taught by Kishi. The motivation for doing so is to improve accuracy.
Regarding claim 4, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 1.
Kishimoto discloses acquiring, as the information regarding the target, the information representing a surface of the target at a prescribed position on the target ([0015] - When OCR result information is obtained, if position coordinates of an area in which the information of an obtaining target is included (a data input area) are already known (for example, a business form in a known format), an area of an OCR processing target can be identified, and the OCR result can be obtained by performing OCR processing on the area, “surface”. When an image of a business form in a known format is captured, if a current image capturing range of the business form can be identified, an OCR processing target area (a data input area) can be identified based on a relative positional relationship, and thus an image of the target area may be enlarged and captured. Thus, the present applicant discusses a technique for first identifying an OCR processing target area by capturing an entire document image and performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame and the like so as to prompt a user to enlarge and capture an image of the identified target area. The present applicant further discusses a technique for continuing the guide display by tracking and highlighting the area while the user performs an operation for gradually bringing the camera close to the OCR processing target area of the document after the entire document image is captured. Guiding an area to be enlarged and captured enables a user to avoid enlarging and capturing an image of a useless portion and to obtain an OCR result by efficiently capturing an image.).
Regarding claim 5, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 4.
Kishimoto discloses wherein the information representing the surface is the information indicating a direction on the surface ([0015] - When OCR result information is obtained, if position coordinates of an area in which the information of an obtaining target is included (a data input area) are already known (for example, a business form in a known format), an area of an OCR processing target can be identified, and the OCR result can be obtained by performing OCR processing on the area, “surface”.).
Kishi discloses a normal direction of the surface ([0322] - In FIG. 21, an image pickup apparatus 711 (an example of the image pickup apparatus 12 shown in FIG. 13) and a surface (a parallel surface 721) parallel to an image pickup surface of the image pickup apparatus 711 are shown. In computer graphics, a normal refers to a vector, or imaginary arrow, perpendicular to a surface; thus, the light direction of the image pickup apparatus 711 is perpendicular to the surface.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Kishimoto with the normal direction of the surface as taught by Kishi. The motivation for doing so is to improve accuracy.
Regarding claim 7, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 1.
Kishimoto discloses displaying a candidate position regarding the second operation, identified based on the information regarding the target in the enlarged image ([0015] - performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame and the like so as to prompt a user to enlarge and capture an image of the identified target area.).
Regarding claim 8, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 7.
Kishimoto discloses identifying, as the candidate position, at least a portion of positions at which there is little variation in the vicinity in the information representing the surface of the target ([0053] - When the difference (the change amount of the image capturing area) is a predetermined threshold value or greater (in other words, the camera of the mobile terminal is moved a predetermined distance or more) (YES in step S809), the main control unit 303 advances the processing to step S810. Whereas when the difference is not the predetermined threshold value or greater (NO in step S809), the main control unit 303 advances the processing to step S812. The claimed “little variation in the vicinity” reads on this predetermined threshold determination.).
Regarding claim 9, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 7.
Kishimoto discloses identifying the candidate position based on a distance from an image capture device capturing the image to a position on the target ([0053] - When the difference (the change amount of the image capturing area) is a predetermined threshold value or greater (in other words, the camera of the mobile terminal is moved a predetermined distance or more) (YES in step S809), the main control unit 303 advances the processing to step S810. Whereas when the difference is not the predetermined threshold value or greater (NO in step S809), the main control unit 303 advances the processing to step S812.).
Regarding claim 10, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 9.
Kishimoto discloses identifying, as the candidate position, at least a portion of positions at which there is little variation in the vicinity in the distance from the image capture device capturing the image to the position on the target ([0053] - When the difference (the change amount of the image capturing area) is a predetermined threshold value or greater (in other words, the camera of the mobile terminal is moved a predetermined distance or more) (YES in step S809), the main control unit 303 advances the processing to step S810. Whereas when the difference is not the predetermined threshold value or greater (NO in step S809), the main control unit 303 advances the processing to step S812. The claimed “little variation in the vicinity” reads on this predetermined threshold determination.).
Regarding claim 11, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 9.
Kishimoto discloses identifying, as the candidate position, at least a portion of positions at which the distance from the image capture device capturing the image to the position on the target is short ([0053] - When the difference (the change amount of the image capturing area) is a predetermined threshold value or greater (in other words, the camera of the mobile terminal is moved a predetermined distance or more) (YES in step S809), the main control unit 303 advances the processing to step S810. Whereas when the difference is not the predetermined threshold value or greater (NO in step S809), the main control unit 303 advances the processing to step S812.).
Regarding claim 13, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 1.
Kishimoto discloses transmitting an instruction signal including coordinates indicating the selected point ([0015] - When an image of a business form in a known format is captured, if a current image capturing range of the business form can be identified, an OCR processing target area (a data input area) can be identified based on a relative positional relationship, and thus an image of the target area may be enlarged and captured. [0021] - An Input/Output interface 204 transmits and receives data to and from the touch panel 102.).
Regarding claim 14, Kishimoto in view of Kishi and Tsuzaki discloses all the limitations of claim 13.
Kishimoto discloses wherein the instruction signal further includes the information regarding the target at the selected point ([0015] - When an image of a business form in a known format is captured, if a current image capturing range of the business form can be identified, an OCR processing target area (a data input area) can be identified based on a relative positional relationship, and thus an image of the target area may be enlarged and captured. [0021] - An Input/Output interface 204 transmits and receives data to and from the touch panel 102.
[0027] - An operation information obtainment unit 305 obtains information indicating a content of a user operation performed via the UI displayed by the information display unit 304 and notifies the main control unit 303 of the obtained information. For example, when a user touches the area 401 with his/her hand, the operation information obtainment unit 305 detects information of a touched position on the screen and transmits the information of the detected position to the main control unit 303.).
Regarding claim 15, see rejection on claim 16.
Regarding claim 16, Kishimoto discloses a non-transitory storage medium that stores a program for causing a computer to perform the following ([0020] - FIG. 2 illustrates an example of a hardware configuration of the mobile terminal 100. The mobile terminal 100 is constituted of various units (201 to 207). A central processing unit (CPU) 201 is a unit for executing various programs and realizing various functions. A random access memory (RAM) 202 is a unit for storing various information pieces. The RAM 202 is further used as a temporary working and storage area of the CPU 201. A read-only memory (ROM) 203 is a storage medium for storing various programs and the like. The ROM 203 may be a storage medium such as a flash memory, a solid state disk (SSD), and a hard disk drive (HDD). The CPU 201 loads a program stored in the ROM 203 to the RAM 202 and executes the program. Accordingly, the CPU 201 functions as each processing unit of the mobile application illustrated in FIG. 3 and executes processing in each step in the sequences.):
detecting that a first operation for designating a position in an image has been performed ([0015] - identifying an OCR processing target area by capturing an entire document image and performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame and the like so as to prompt a user to enlarge and capture an image of the identified target area.);
displaying an enlarged image of an area at the position ([0015] - identifying an OCR processing target area by capturing an entire document image and performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame.);
acquiring the information regarding a target appearing in the enlarged image ([0015] - performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame and the like so as to prompt a user to enlarge and capture an image of the identified target area.); and
displaying the information regarding the target ([0015] - performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame and the like so as to prompt a user to enlarge and capture an image of the identified target area.);
detecting the second operation and identifying, as a selected point in the image, a position at which the second operation ended ([0015] - identifying an OCR processing target area by capturing an entire document image and performing guide display of an area of an image capturing target by highlighting a portion of the target area with a red frame and the like so as to prompt a user to enlarge and capture an image of the identified target area.
[0053] - When the difference (the change amount of the image capturing area) is a predetermined threshold value or greater (in other words, the camera of the mobile terminal is moved a predetermined distance or more) (YES in step S809), the main control unit 303 advances the processing to step S810, and the processing ends at step S813, “a position at which the second operation ended”.).
Kishimoto does not disclose displaying a vicinity area. However, Kishi discloses displaying a vicinity area ([0332] - In the example shown in FIG. 22, with respect to the image 812 of the target object, the frame region 831 is set on the upper side and in the vicinity of the horizontal center, the frame region 832 is set on the right side and in the vicinity of the vertical center, the frame region 833 is set on the lower side and in the vicinity of the horizontal center, and the frame region 834 is set on the left side and in the vicinity of the vertical center.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Kishimoto with displaying a vicinity area as taught by Kishi. The motivation for doing so is to improve accuracy.
Kishimoto in view of Kishi does not disclose the following limitations; however, Tsuzaki discloses:
display the information regarding the target while receiving a second operation designating a position in the enlarged image ([0342] First of all, the user starts an operation to bring two fingers thereof into contact with the two ends of a region in which the user wants to display an enlarged image selected among images appearing on the display screen of the input/output display 22 as shown in FIG. 24B and, then, while sustaining the two fingers in the state of being in contact with the surface of the display screen thereafter, the user moves the two fingers over the surface of the display screen in order to separate the fingers from each other to result in an enlarged image as shown in FIG. 24C when the user removes the two fingers from the surface of the display screen.);
wherein the second operation includes a touching operation with a finger to the enlarged image displayed on a touch panel-type display, and the position at which the second operation ended is a position where the finger is removed from the touch panel-type display ([0342] First of all, the user starts an operation to bring two fingers thereof into contact with the two ends of a region in which the user wants to display an enlarged image selected among images appearing on the display screen of the input/output display 22 as shown in FIG. 24B and, then, while sustaining the two fingers in the state of being in contact with the surface of the display screen thereafter, the user moves the two fingers over the surface of the display screen in order to separate the fingers from each other to result in an enlarged image as shown in FIG. 24C when the user removes the two fingers from the surface of the display screen.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Kishimoto in view of Kishi with displaying the information regarding the target while receiving a second operation designating a position in the enlarged image, wherein the second operation includes a touching operation with a finger to the enlarged image displayed on a touch panel-type display and the position at which the second operation ended is a position where the finger is removed from the touch panel-type display, as taught by Tsuzaki. The motivation for doing so is to decrease the load borne.
Response to Arguments
Claim Rejection Under 35 U.S.C. 103
Applicant asserts “without acquiescing to the merits of the rejection, the references do not reasonably suggest the alleged "display the information regarding the target" also "while receiving a second operation designating a position in the enlarged image", in contrast to the claimed "display the information regarding the target while receiving a second operation designating a position in the enlarged image." Instead, see page 6 of the Office Action where those alleged OCR or "display the information" features are asserted to be "to prompt a user to enlarge and capture an image ... ", which, even if accurate, is not indicative of the "while receiving a second operation designating a position in the enlarged image" claim features much less also of the "detect the second operation and identify, as a selected point in the image, a position at which the second operation ended" claim features.”
The argument has been fully considered and is persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the Tsuzaki reference.
The examiner suggests amending the claim to recite a specific element such that, when the claim is read in light of the invention, it is directed to a unique technology. The examiner can be reached at 571-270-0724 for further discussion.
Regarding claims 2 – 14 and 17, the Applicant asserts that they are not obvious based on their dependency from independent claim 1. The examiner respectfully cannot concur with the Applicant, for the same reasons noted in the examiner's response to the arguments asserted for claim 1.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu whose telephone number is (571) 270-0724. The examiner can normally be reached on Monday-Thursday and alternate Fridays (9:30am - 6:00pm) EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
The information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Ming Wu/
Primary Examiner, Art Unit 2616