DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Note that citations to figures and elements should be understood to refer also to any pertinent explanatory text in the reference.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3-9, 11-13, 15-19, and 21 are rejected, claims 1 and 3-5 in the alternative, under 35 U.S.C. 102(a)(1) as being anticipated by US 2012/0127323 A1 (Kasuya).
Regarding claim 1, Kasuya teaches a device management system (Abstract; Fig. 1) comprising:
a camera that captures a surrounding image of a surrounding area (Fig. 1 at 4);
an operation detector that detects an operation by a user (Fig. 2 at 14);
circuitry configured to, in response to the operation by the user ([75]), generate a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([36]; Figs. 3, 12); and
a display that displays the partial image (Fig. 1 at 2 or 6), wherein
the circuitry is further configured to determine an area for the partial image in the surrounding image based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).
Regarding claim 3, Kasuya teaches wherein the plurality of predetermined images indicate a first end and a second end of the partial image in the surrounding image (Fig. 6), and the circuitry is configured to generate the partial image that includes the plurality of predetermined images ([71]-[72], Fig. 6).
Regarding claim 4, Kasuya teaches wherein the plurality of predetermined images are two-dimensional codes (Fig. 5 at 40, 41; Fig. 6 at 51A-51D).
Regarding claim 5, Kasuya teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the partial image of the surrounding image (Fig. 6 at 51A-51D, Fig. 1 at 2).
Regarding claim 6, Kasuya teaches wherein the operation detector detects an instruction by the user to end generating the partial image of the surrounding image based on the plurality of predetermined images included in the surrounding image ([77]).
Regarding claim 7, Kasuya teaches a device adapted to connect to a terminal including a display (Abstract; Fig. 1), the device comprising
circuitry (Fig. 1 at 5; Fig. 2) configured to:
acquire a surrounding image captured by a camera ([30], [33]; Fig. 1 at 4, 5);
in response to an operation by a user ([75]), generate a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([36]; Figs. 3, 12); and
transmit the partial image to the terminal ([71]), wherein
the circuitry is further configured to determine an area for the partial image in the surrounding image based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).
Regarding claim 8, Kasuya teaches a method (Abstract; Fig. 1) comprising:
acquiring a surrounding image captured by an image-capturing device ([30], [33]; Fig. 1 at 4, 5);
in response to an operation by a user ([75]), generating a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([36]; Figs. 3, 12); and
transmitting the partial image to a terminal including a display ([71]), wherein
an area for the partial image in the surrounding image is determined based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).
Regarding claim 9, Kasuya teaches a non-transitory recording medium storing a plurality of instructions which, when executed by one or more processors, cause the one or more processors to perform the method according to claim 8 ([34]-[35]; Fig. 2).
Regarding claim 11, Kasuya teaches wherein the plurality of predetermined images indicate a first end and a second end of the partial image in the surrounding image (Fig. 6), and the circuitry is configured to generate the partial image that includes the plurality of predetermined images ([71], Fig. 6).
Regarding claim 12, Kasuya teaches wherein the plurality of predetermined images are two-dimensional codes (Fig. 5 at 40, 41; Fig. 6 at 51A-51D).
Regarding claim 13, Kasuya teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the partial image of the surrounding image (Fig. 6 at 51A-51D, Fig. 1 at 2).
Regarding claim 15, Kasuya teaches wherein the operation detector detects an instruction by the user to end generating the partial image of the surrounding image based on the plurality of predetermined images included in the surrounding image ([77]).
Regarding claim 16, Kasuya teaches a device adapted to connect to a terminal including a display (Abstract; Fig. 1), the device comprising:
a camera (Fig. 1 at 4); and
circuitry (Fig. 1 at 3, 4, 5; Fig. 2) configured to:
acquire a surrounding image captured by the camera ([30], [33]; Fig. 1 at 4, 5);
in response to an operation by a user ([75]), clip a part of the surrounding image based on a plurality of predetermined images included in the surrounding image and generate a clipped image ([36]; Figs. 3, 12); and
transmit the clipped image to the terminal ([71]), wherein
the circuitry is further configured to determine the part to be clipped in the surrounding image based on a range defined by the plurality of predetermined images ([36], [43], [48]; Fig. 6).
Regarding claim 17, Kasuya teaches wherein the plurality of predetermined images indicate a first end and a second end of the part to be clipped in the surrounding image (Fig. 6), and the circuitry is configured to generate the clipped image by clipping the part that includes the plurality of predetermined images ([71], Fig. 6).
Regarding claim 18, Kasuya teaches wherein the predetermined images are two-dimensional codes (Fig. 5 at 40, 41; Fig. 6 at 51A-51D).
Regarding claim 19, Kasuya teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the part to be clipped in the surrounding image (Fig. 6 at 51A-51D, Fig. 1 at 2).
Regarding claim 21, Kasuya teaches wherein the operation detector detects an instruction by the user to end clipping the part of the surrounding image based on the plurality of predetermined images included in the surrounding image ([77]).
Claims 1, 3-5, and 22 are rejected, claims 1 and 3-5 in the alternative, under 35 U.S.C. 102(a)(1) as being anticipated by JP 2014010568 A (Ujiie).
Regarding claim 1, Ujiie teaches a device management system comprising:
a camera that captures a surrounding image of a surrounding area ([48]);
an operation detector that detects an operation by a user ([114]);
circuitry configured to, in response to the operation by the user, generate a partial image of the surrounding image based on a plurality of predetermined images included in the surrounding image ([116]-[117]); and
a display that displays the partial image ([118]), wherein
the circuitry is further configured to determine an area for the partial image in the surrounding image based on a range defined by the plurality of predetermined images ([116]-[117]).
Regarding claim 3, Ujiie teaches wherein the plurality of predetermined images indicate a first end and a second end of the partial image in the surrounding image, and the circuitry is configured to generate the partial image that includes the plurality of predetermined images ([116]-[117]).
Regarding claim 4, Ujiie teaches wherein the plurality of predetermined images are two-dimensional codes ([193]).
Regarding claim 5, Ujiie teaches wherein each of the plurality of predetermined images is an image displayed on an object located in the partial image of the surrounding image ([66], [80], [116]-[117]).
Regarding claim 22, Ujiie teaches a terminal including a display (Fig. 2A(A) at 4), the terminal comprising circuitry configured to:
acquire a surrounding image captured by a camera ([48]);
in response to an operation by a user ([114]), clip a part of the surrounding image based on a plurality of predetermined images included in the surrounding image and generate a clipped image ([116]-[117]); and
display the clipped image in the display ([118]), wherein
the circuitry is further configured to determine the part to be clipped in the surrounding image based on a range defined by the plurality of predetermined images ([116]-[117]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2012/0127323 A1 (Kasuya) as applied to claims 5, 13, and 19, respectively, above, and further in view of US 2017/0068321 A1 (Kuo).
Regarding claims 10, 14, and 20, Kasuya does not expressly teach wherein the operation by the user is performed with respect to the object. Kuo teaches wherein the operation by the user is performed with respect to the object ([18]). The suggestion to modify the teaching of Kasuya by the teaching of Kuo is present because both Kasuya and Kuo teach a projector and camera system, each with a different form of input. The motivation is to provide an additional means of input for the user, and for the same reason the combination would have been unsurprising and would have had a reasonable expectation of success. Thus, before the effective filing date of the claimed invention, the combination of Kasuya and Kuo would have rendered obvious, to one of ordinary skill in the art, wherein the operation by the user is performed with respect to the object.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GENE W LEE whose telephone number is (571)270-7148. The examiner can normally be reached M-F 9:45am-6:15pm.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LunYi Lao can be reached at 571-272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Gene W Lee/Primary Examiner, Art Unit 2621