DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Objections
Claim 12 is objected to because of the following informalities: the claim limitation states, “a position image representing the second image capturing position that is closet to the first image capturing position”. Examiner presumes the statement should read, “a position image representing the second image capturing position that is closest to the first image capturing position”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 6-8, 10, 12-14, and 16-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shohara et al. (US Pub. No. 2015/0042647) in view of Sawada et al. (US Pub. No. 2023/0308622).
Regarding claim 1, Shohara discloses:
An information processing apparatus, (at least refer to fig. 1 and paragraph 25. Describes an omnidirectional imaging device 110) comprising:
Circuitry configured to generate a screen including a first captured image display area displaying a first predetermined-area image and a three-dimensional image display area, (at least refer to fig. 2-3, 6 and paragraphs 45, 54. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image. Para. 54, describes: the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere)
The first predetermined-area image being a first predetermined area of a first captured image, the first captured image being obtained by capturing an object with an image capturing device at a first image capturing position, (at least refer to fig. 2-3 and paragraph 45. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image) and
The three-dimensional image display area displaying at least a part of a three-dimensional image aligned with the first captured image, the three-dimensional image including a position image indicating a second image capturing position of the image capturing device at a specific date and time of image capturing, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation); and
A memory that stores the second image capturing position, a second predetermined-area image, and data in association with one another, (at least refer to fig. 2, 7 and paragraphs 38, 51. Describes the omnidirectional image storage part 256 stores the omnidirectional image which is imaged by the omnidirectional imaging device 110 and is input to the image processors 120 to 124 via the above-described connection or external recording medium. The user input receiver (receiver, reception unit) 258 receives the input value providing the output range of the omnidirectional image according to the operation based on the changing operation of output range performed through the input part 252, and sends the input value to the plane image generator 260. Para. 51, describes: the omnidirectional image is stored in the omnidirectional image storage part 256 and then, input and converted to the output image through the image processing by the plane image generator 260)
The second predetermined-area image being a second predetermined area of a second captured image, the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image, (at least refer to fig. 6-8 and paragraphs 52, 64. Describes the plane image generator 260 receives the input value including the above-described pan designation value, tilt designation value and zoom designation value as a result of the changing operation of the output range from the user input receiver 258. The plane image generator 260 determines an image generation parameter according to the input value as follows, and performs the image generation process of the output image according to the decided image-generation parameter. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260)
Wherein the circuitry is configured to cause the screen to additionally include a second captured image display area displaying the second predetermined-area image, (at least refer to fig. 2-3 and paragraph 45. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image. Then, the positions of the two omnidirectional images including each hemisphere part are adjusted in accordance with the overlapped area-matching operation, and combined. Therefore, the omnidirectional image having a whole sphere is generated).
Shohara does not explicitly disclose:
the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image
a text display area including the text data.
Sawada teaches:
the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image, (at least refer to fig. 14 and paragraph 190. Describes examples of the imaging start date and time include the date and time when the user input an image capturing request to the communication terminal 30, and the date and time when the image capturing apparatus 10 captured an image such as a wide-view image. The imaging start date and time information may be time stamp information of a captured image such as a wide-view image)
a text display area including the text data, (at least refer to fig. 20 and paragraph 218. Describes the first image field 211 also displays a device name 214. The device name 214 is transmitted from the image capturing apparatus 10 together with the wide-view image. The device name 214 is information set by the user a or the like).
The two references are analogous art because they both relate to the same field of invention, namely wide-view image display devices.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the text display area as taught by Sawada with the image processing in the display terminal as disclosed by Shohara. The motivation to combine the Sawada reference is to provide authentication information that marks the generated images for easy identification.
Regarding claim 17, Shohara discloses:
A screen generating, (at least refer to fig. 1 and paragraph 25. Describes an omnidirectional image display system 100) method comprising:
Generating a screen including a first captured image display area and a three-dimensional image display area, (at least refer to fig. 2-3, 6 and paragraphs 45, 54. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image. Para. 54, describes: the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere)
The first captured image display area displaying a first predetermined-area image being a first predetermined area of a first captured image, the first captured image being obtained by capturing an object with an image capturing device at a first image capturing position, (at least refer to fig. 2-3 and paragraph 45. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image)
The three-dimensional image display area displaying at least a part of a three-dimensional image aligned with the first captured image, the three-dimensional image including a position image indicating a second image capturing position of the image capturing device at a specific date and time of image capturing, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation); and
Storing, in a memory, the second image capturing position, a second predetermined-area image, and data in association with one another, (at least refer to fig. 2, 7 and paragraphs 38, 51. Describes the omnidirectional image storage part 256 stores the omnidirectional image which is imaged by the omnidirectional imaging device 110 and is input to the image processors 120 to 124 via the above-described connection or external recording medium. The user input receiver (receiver, reception unit) 258 receives the input value providing the output range of the omnidirectional image according to the operation based on the changing operation of output range performed through the input part 252, and sends the input value to the plane image generator 260. Para. 51, describes: the omnidirectional image is stored in the omnidirectional image storage part 256 and then, input and converted to the output image through the image processing by the plane image generator 260)
The second predetermined-area image being a second predetermined area of a second captured image, the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image, (at least refer to fig. 6-8 and paragraphs 52, 64. Describes the plane image generator 260 receives the input value including the above-described pan designation value, tilt designation value and zoom designation value as a result of the changing operation of the output range from the user input receiver 258. The plane image generator 260 determines an image generation parameter according to the input value as follows, and performs the image generation process of the output image according to the decided image-generation parameter. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260)
Wherein the generating includes generating the screen to additionally include a second captured image display area displaying the second predetermined-area image, (at least refer to fig. 2-3 and paragraph 45. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image. Then, the positions of the two omnidirectional images including each hemisphere part are adjusted in accordance with the overlapped area-matching operation, and combined. Therefore, the omnidirectional image having a whole sphere is generated).
Shohara does not explicitly disclose:
the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image
a text display area including the text data.
Sawada teaches:
the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image, (at least refer to fig. 14 and paragraph 190. Describes examples of the imaging start date and time include the date and time when the user input an image capturing request to the communication terminal 30, and the date and time when the image capturing apparatus 10 captured an image such as a wide-view image. The imaging start date and time information may be time stamp information of a captured image such as a wide-view image)
a text display area including the text data, (at least refer to fig. 20 and paragraph 218. Describes the first image field 211 also displays a device name 214. The device name 214 is transmitted from the image capturing apparatus 10 together with the wide-view image. The device name 214 is information set by the user a or the like).
Regarding the rejection of claim 17, refer to the motivation of claim 1.
Regarding claim 18, Shohara discloses:
A non-transitory recording medium storing a plurality of instructions which, when executed by one or more processors, causes the one or more processors to perform a screen generating, (at least refer to fig. 1, 12 and paragraphs 25, 89. Describes an omnidirectional image display system 100. Para. 89, describes: The flash memory 14 stores an OS to control the tablet terminal 122, a control program to perform the above-described function parts, various system and setting information, and user data including the above-described omnidirectional image. The recording medium which stores the user data such as the omnidirectional image is inserted to the slot of the external recording medium 16) method comprising:
Generating a screen including a first captured image display area and a three-dimensional image display area, (at least refer to fig. 2-3, 6 and paragraphs 45, 54. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image. Para. 54, describes: the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere)
The first captured image display area displaying a first predetermined-area image being a first predetermined area of a first captured image, the first captured image being obtained by capturing an object with an image capturing device at a first image capturing position, (at least refer to fig. 2-3 and paragraph 45. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image)
The three-dimensional image display area displaying at least a part of a three-dimensional image aligned with the first captured image, the three-dimensional image including a position image indicating a second image capturing position of the image capturing device at a specific date and time of image capturing, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation); and
Storing, in a memory, the second image capturing position, a second predetermined-area image, and data in association with one another, (at least refer to fig. 2, 7 and paragraphs 38, 51. Describes the omnidirectional image storage part 256 stores the omnidirectional image which is imaged by the omnidirectional imaging device 110 and is input to the image processors 120 to 124 via the above-described connection or external recording medium. The user input receiver (receiver, reception unit) 258 receives the input value providing the output range of the omnidirectional image according to the operation based on the changing operation of output range performed through the input part 252, and sends the input value to the plane image generator 260. Para. 51, describes: the omnidirectional image is stored in the omnidirectional image storage part 256 and then, input and converted to the output image through the image processing by the plane image generator 260)
The second predetermined-area image being a second predetermined area of a second captured image, the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image, (at least refer to fig. 6-8 and paragraphs 52, 64. Describes the plane image generator 260 receives the input value including the above-described pan designation value, tilt designation value and zoom designation value as a result of the changing operation of the output range from the user input receiver 258. The plane image generator 260 determines an image generation parameter according to the input value as follows, and performs the image generation process of the output image according to the decided image-generation parameter. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260)
Wherein the generating includes generating the screen to additionally include a second captured image display area displaying the second predetermined-area image, (at least refer to fig. 2-3 and paragraph 45. Describes the two photographic images obtained by the two imaging optical systems 212A and 212B are combined and the distortion and vertical distortion thereof are corrected, using the information from a not-shown three-axis acceleration sensor. In the image-combining process, at first, the omnidirectional images having a partly overlapped hemisphere image are generated for each of each photographic image configured to be the plane surface image. Then, the positions of the two omnidirectional images including each hemisphere part are adjusted in accordance with the overlapped area-matching operation, and combined. Therefore, the omnidirectional image having a whole sphere is generated).
Shohara does not explicitly disclose:
the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image
a text display area including the text data.
Sawada teaches:
the second captured image being obtained by capturing the object with the image capturing device at the specific date and time of image capturing that is associated with the second image capturing position indicated by the position image, (at least refer to fig. 14 and paragraph 190. Describes examples of the imaging start date and time include the date and time when the user input an image capturing request to the communication terminal 30, and the date and time when the image capturing apparatus 10 captured an image such as a wide-view image. The imaging start date and time information may be time stamp information of a captured image such as a wide-view image)
a text display area including the text data, (at least refer to fig. 20 and paragraph 218. Describes the first image field 211 also displays a device name 214. The device name 214 is transmitted from the image capturing apparatus 10 together with the wide-view image. The device name 214 is information set by the user a or the like).
Regarding the rejection of claim 18, refer to the motivation of claim 1.
Regarding claim 2, Shohara does not disclose:
Wherein the circuitry is configured to generate the text data, the text data being speech text generated from voice collected at a date and time when the captured image was captured.
Sawada teaches:
Wherein the circuitry is configured to generate the text data, the text data being speech text generated from voice collected at a date and time when the captured image was captured, (at least refer to fig. 20 and paragraphs 218, 132. Describes the first image field 211 also displays a device name 214. The device name 214 is transmitted from the image capturing apparatus 10 together with the wide-view image. The device name 214 is information set by the user a or the like. Para. 132, describes: The input means are not limited to the keyboard 311 and the pointing device 312 and may be a touch panel, a voice input device, or the like. The microphone 318 is an example of a built-in sound collecting means for receiving input sounds. The audio input/output I/F 317 is a circuit for controlling input and output of audio signals between the microphone 318 and the speaker 319 under control of the CPU 301).
It has been held that a recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus satisfying the claimed structural limitations. Ex parte Masham, 2 USPQ2d 1647 (1987).
Regarding claim 3, Shohara does not disclose:
Wherein the text data is text being input.
Sawada teaches:
Wherein the text data is text being input, (at least refer to fig. 20 and paragraphs 218, 167. Describes the first image field 211 also displays a device name 214. The device name 214 is transmitted from the image capturing apparatus 10 together with the wide-view image. The device name 214 is information set by the user a or the like. Para. 167, describes: In an example, the communication terminal 30 may include a touch panel or an interface for gesture or voice input. In this example, the communication terminal 30 may accept various selections or operation inputs in accordance with a touch input, a gesture, or a voice input).
Regarding the rejection of claim 3, refer to the motivation of claim 1.
Regarding claim 6, Shohara discloses:
wherein the circuitry is configured to superimpose, on the three-dimensional image display area, an image representing a first virtual camera at the first image capturing position, the first virtual camera having an imaging area determined by an angle of view representing the first predetermined-area image being displayed, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation).
Regarding claim 7, Shohara discloses:
wherein the position image is an image representing a second virtual camera having an imaging area determined by an angle of view representing the second predetermined-area image, the second predetermined-area image being displayed at the second image capturing position in the three-dimensional image display area, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation).
Regarding claim 8, Shohara discloses:
Wherein the circuitry is configured to superimpose a plurality of position images including the position image on the three-dimensional image display area at respective second image capturing positions, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation).
Regarding claim 10, Shohara discloses:
Wherein the circuitry is configured to generate the screen so as to display a corresponding predetermined area in the three-dimensional image display area, the corresponding predetermined area corresponding to the second predetermined area of the second predetermined-area image displayed in the second captured image display area, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation).
Regarding claim 12, Shohara discloses:
Wherein in a case where the plurality of position images representing the respective second image capturing positions are present within a predetermined range from the first image capturing position, the circuitry is configured to select, from among the plurality of position images, a position image representing the second image capturing position that is closet to the first image capturing position, and cause the screen to include the second captured image display area displaying the second predetermined-area image corresponding to the second image capturing position represented by the selected position image, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation).
Shohara and Sawada do not explicitly disclose:
a position image representing the second image capturing position that is closet to the first image capturing position
It has been held that a recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus satisfying the claimed structural limitations. Ex parte Masham, 2 USPQ2d 1647 (1987).
Regarding claim 13, Shohara discloses:
Wherein the circuitry is configured to receive an instruction to select a particular position image from among the plurality of position images, and cause the screen to include the second captured image display area displaying the second predetermined-area image corresponding to the selected position image, (at least refer to fig. 6-8 and paragraphs 54, 64. Describes the plane image generator 260 generates the output image of the plane image by the perspective projection of the three-dimensional model having the omnidirectional image attached to the inner surface of the sphere. The image generation parameter during the perspective projection is determined according to the input value. Para. 64, describes: according to the pan, tilt and zoom designation values determined as a result of the operation for changing the display range, the changed image-processing parameter is decided, and the process proceeds to step S102. In the following step S102, the generation process of the plane image is performed according to the image-processing parameter after being changed by the plane image generator 260. In step S103, the image display area 310 of the image viewer surface 300 is updated by the image output part 262 with the plane image newly generated according to the user operation).
Regarding claim 14, Shohara discloses:
Wherein the circuitry is configured to change a display mode of the selected position image, (at least refer to fig. 10-11 and paragraph 71. Describes the plane image generator 260 adopts a configuration which obtains an appropriate display effect suitable to be monitored by a viewer by changing the image generation parameter of the display model through a single projection method).
Sawada teaches:
Wherein the circuitry is configured to change a display mode of the selected position image, (at least refer to fig. 1 and paragraph 90. Describes that, depending on the display method, an image that can be displayed on the display screen of the display at a time is also the wide-view image, as long as the image has a viewing angle in a wide range, in response to the display method being switched to a predetermined display method (such as a display mode, or enlargement or reduction) or changed).
Regarding claim 16, Shohara discloses:
the information processing apparatus of claim 1; and a display terminal communicably connected with the information processing apparatus and including a display that displays the screen, (at least refer to fig. 1 and paragraph 26. Describes the image in a predetermined format obtained by the omnidirectional imaging device 110 is sent to the image processors 120 to 124 via wireless communication and displayed on the display device provided in the image processors 120 to 124 after a predetermined image process).
Allowable Subject Matter
Claims 4-5, 9, 11, and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IFEDAYO B ILUYOMADE whose telephone number is (571)270-7118. The examiner can normally be reached Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason, can be reached at (571)270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IFEDAYO B ILUYOMADE/Primary Examiner, Art Unit 2624 03/05/2026