DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitations are:
estimation information generation unit, free viewpoint image generation unit, three-dimensional image generation unit, and output image generation unit in claims 1-7 and 13.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Specification
Title of the Invention
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested:
IMAGE PRODUCTION SYSTEM, IMAGE PRODUCTION METHOD, AND PROGRAM THAT GENERATES AN OUTPUT IMAGE ON A BASIS OF A FREE VIEWPOINT IMAGE
Abstract
The abstract of the disclosure is objected to because it contains 170 words. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Applicant is reminded of the proper content of an abstract of the disclosure.
A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art.
If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives.
Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps.
Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length.
See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because claim 1 is directed to an image production system comprising the steps of generating, generating, generating and generating, which amount to nothing more than software instructions; software instructions per se are non-statutory under 35 U.S.C. 101.
Claims 2-13 depend from claim 1 and recite further software instructions; they are therefore rejected under the same rationale.
Claim 14 is directed to: An image production method comprising the steps of generating, generating, generating and generating; therefore claim 14 has the same problem as claim 1 and is rejected under the same rationale.
Claim 15 is directed to: A program for causing an information processing device in an image production system and is therefore a program per se. A program per se is non-statutory under 35 U.S.C. 101.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 14 and 15 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 15 and 16 of copending Application No. 18/713,227 (hereinafter ’227) in view of OGASAWARA (US2023/0033201A1) and further in view of LEE (US2014/0354777A1).
This is a provisional nonstatutory double patenting rejection.
Claims 2 and 4 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 4 and 13 of copending Application No. 18/713,227 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined claims are merely slightly narrower versions of the co-pending claims.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Regarding claims 1, 14 and 15, co-pending ’227 teaches all but the last limitation, "generating an output image on a basis of the free viewpoint image and the three-dimensional image"; however, the analogous prior art OGASAWARA teaches:
generating an output image on a basis of the free viewpoint image (OGASAWARA: par. 26).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine generating an output image on a basis of the free viewpoint image, as shown in OGASAWARA, with ’227 for the benefit of addressing a shortcoming in the prior art: in a virtual viewpoint image, it is assumed that a marker is added to a target to be focused in a scene. If a marker is input to the virtual viewpoint image, the marker is displayed at an appropriate position when viewed from the viewpoint at which the marker was input. However, if the viewpoint is switched to another viewpoint, the marker may be displayed at an unintended position. As described above, when rendering additional information such as a marker for a virtual viewpoint image, the rendered additional information may be displayed at an unintended position [3].
’227 in view of OGASAWARA does not teach the following; however, the analogous prior art LEE teaches: generating an output image on a basis of the three-dimensional image (LEE: par. 13).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine generating an output image on a basis of the three-dimensional image as shown in LEE with the previous combination for the benefit of fulfilling a need for an image acquisition technology using a light field camera, which is capable of simultaneously providing 2D and 3D images having maximum resolution and improving resolution of element images, that is, 3D spatial information [8].
Instant case: 18/713,224
[Claim 1] An image production system comprising: an estimation information generation unit that generates estimation information regarding a subject on a basis of at least one of a captured image or sensor information;
a free viewpoint image generation unit that generates a three-dimensional model of the subject on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generates a free viewpoint image that is an image of an arbitrary viewpoint for the subject using the three-dimensional model;
a three-dimensional image generation unit capable of generating a three-dimensional image on a basis of the estimation information and the three-dimensional model of the subject; and
an output image generation unit that generates an output image on a basis of the free viewpoint image generated by the free viewpoint image generation unit and the three-dimensional image generated by the three-dimensional image generation unit.
[Claim 2] The image production system according to claim 1, wherein the output image generation unit generates an output image by selectively using a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit.
[Claim 4] The image production system according to claim 1, wherein the output image generation unit generates an output image by combining a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit.
[Claim 14] An image production method comprising: generating estimation information regarding a subject on a basis of at least one of a captured image or sensor information;
generating a three-dimensional model of the subject on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generating a free viewpoint image that is an image of an arbitrary viewpoint for the subject using the three-dimensional model;
generating a three-dimensional image on a basis of the estimation information and the three-dimensional model of the subject; and
generating an output image on a basis of the free viewpoint image and the three-dimensional image.
[Claim 15] A program for causing an information processing device in an image production system to execute processing of:
generating estimation information regarding a subject on a basis of at least one of a captured image or sensor information;
generating a three-dimensional model of the subject on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generating a free viewpoint image that is an image of an arbitrary viewpoint for the subject using the three-dimensional model;
generating a three-dimensional image on a basis of the estimation information and the three-dimensional model of the subject; and
generating an output image on a basis of the free viewpoint image and the three-dimensional image.
Co-pending case: 18/713,227
[Claim 1] An image creation system, comprising: an estimation information generation unit that generates estimation information regarding a subject on a basis of at least one of a captured image or sensor information;
a free viewpoint image generation unit that generates a first three-dimensional model, which is a three-dimensional model of the subject, on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generates a free viewpoint image, which is an image of an arbitrary viewpoint of the subject, using the first three-dimensional model; and
a three-dimensional image generation unit capable of generating a three-dimensional image on a basis of the estimation information and a second three-dimensional model, which is a virtual three-dimensional model of the subject.
[Claim 4] The image creation system according to claim 1, further comprising a two-dimensional image generation unit that generates a two-dimensional image by selectively using a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit.
[Claim 13] The image creation system according to, wherein the system generates an image obtained by combining an image based on the estimation information with a live-action image including a free viewpoint image generated by the free viewpoint image generation unit or a three-dimensional image generated by the three-dimensional image generation unit.
[Claim 15] An image creation method, comprising the steps of: generating estimation information regarding a subject on a basis of at least one of a captured image or sensor information;
generating a first three-dimensional model, which is a three-dimensional model of the subject, on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generating a free viewpoint image, which is an image of an arbitrary viewpoint of the subject, using the first three-dimensional model; and
generating a three-dimensional image on a basis of the estimation information and a second three-dimensional model, which is a virtual three-dimensional model of the subject.
[Claim 16] A program causing an information processing device in an image creation system to execute processing, the processing comprising the steps of:
generating estimation information regarding a subject on a basis of at least one of a captured image or sensor information;
generating a first three-dimensional model, which is a three-dimensional model of the subject, on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generating a free viewpoint image, which is an image of an arbitrary viewpoint of the subject, using the first three-dimensional model; and
generating a three-dimensional image on a basis of the estimation information and a second three-dimensional model, which is a virtual three-dimensional model of the subject.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over KUMAR (WO 2019/021315A1) in view of OGASAWARA (US2023/0033201A1), further in view of LUO (WO 2018/227527A1), and further in view of LEE (US2014/0354777A1).
Regarding claim 1, KUMAR teaches:
[Claim 1] An image production system comprising (KUMAR: see abstract):
an estimation information generation unit that generates estimation information regarding a subject on a basis of at least one of a captured image or sensor information (KUMAR: pars. 9-10);
a three-dimensional image generation unit capable of generating a three-dimensional image on a basis of the estimation information (KUMAR: pars. 10 and 42-43);
KUMAR does not teach the following limitations; however, the analogous prior art OGASAWARA teaches:
a free viewpoint image generation unit that generates a three-dimensional model of the subject on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generates a free viewpoint image that is an image of an arbitrary viewpoint for the subject using the three-dimensional model (OGASAWARA: par. 26);
an output image generation unit that generates an output image on a basis of the free viewpoint image generated by the free viewpoint image generation unit (OGASAWARA: par. 26).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine a free viewpoint image generation unit that generates a three-dimensional model of the subject on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generates a free viewpoint image that is an image of an arbitrary viewpoint for the subject using the three-dimensional model, and an output image generation unit that generates an output image on a basis of the free viewpoint image generated by the free viewpoint image generation unit, as shown in OGASAWARA, with KUMAR for the benefit of addressing a shortcoming in the prior art: in recent years, a technique of generating, from a plurality of images obtained by image capturing using a plurality of image capturing devices, an image (virtual viewpoint image) in which a captured scene is viewed from an arbitrary viewpoint has received a great deal of attention. Even in such a virtual viewpoint image, it is assumed that a marker is added to a target to be focused in a scene. If a marker is input to the virtual viewpoint image, the marker is displayed at an appropriate position when viewed from the viewpoint at which the marker was input. However, if the viewpoint is switched to another viewpoint, the marker may be displayed at an unintended position. As described above, when rendering additional information such as a marker for a virtual viewpoint image, the rendered additional information may be displayed at an unintended position [3].
The previous combination of KUMAR and OGASAWARA does not teach the following; however, the analogous prior art LUO teaches:
generating a three-dimensional image on a basis of a three-dimensional model of the subject (LUO: pg. 20 see lines 3-11).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine generating a three-dimensional image on a basis of a three-dimensional model of the subject, as shown in LUO, with the previous combination for the following benefit: in order to solve the problems existing in the prior art in measuring the distance between any two points in space, the present application provides a spatial distance measuring device including at least one sensor unit and a measuring method thereof. Further, the present application provides a scanning device based on the measuring device and the measuring method, which realizes flexible and effective scanning of spatial objects [pg. 5 lines 4-8].
The previous combination of KUMAR, OGASAWARA and LUO does not teach the following; however, the analogous prior art LEE teaches:
generates an output image on a basis of the three-dimensional image generated by the three-dimensional image generation unit (LEE: par. 13).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine generating an output image on a basis of the three-dimensional image generated by the three-dimensional image generation unit, as shown in LEE, with the previous combination for the benefit of fulfilling a need for an image acquisition technology using a light field camera, which is capable of simultaneously providing 2D and 3D images having maximum resolution and improving resolution of element images, that is, 3D spatial information [8].
Claim 14 is analogous to claim 1 and is therefore rejected using the same rationale.
Claim 14 further recites a different preamble, which is also taught by KUMAR: [Claim 14] An image production method (KUMAR: see abstract).
Claim 15 is analogous to claim 1 and is therefore rejected using the same rationale.
Claim 15 further recites a different preamble, which is also taught by KUMAR: [Claim 15] A program for causing an information processing device in an image production system to execute processing of (KUMAR: see par. 1).
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over KUMAR in view of OGASAWARA, further in view of LUO, further in view of LEE, and further in view of SUGANO (WO 2020/121844A1).
Regarding claim 4, the previous combination of KUMAR, OGASAWARA, LUO and LEE does not teach the following; however, the analogous prior art SUGANO teaches:
[Claim 4] The image production system according to claim 1, wherein the output image generation unit generates an output image by combining a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit (SUGANO: fig. 1 see also pars. 15-20 and 21 [lines 1-2]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine wherein the output image generation unit generates an output image by combining a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit as shown in SUGANO with the previous combination for the benefit of addressing a shortcoming in the prior art related to: a method of generating a stroboscopic image showing a subject (image) captured at a plurality of times, has been proposed (e.g., refer to PTL 1). Because the stroboscopic image shows the subject at the plurality of times, the motion or the trajectory of the subject can be easily grasped.
(PTL 1: JP 2007-259477A)
According to PTL 1, however, although the motion or the trajectory of the subject viewed in one direction can be grasped, a change to an arbitrary viewpoint a user desires is not allowed.
The present technology has been made in consideration of such a situation, and is to enable easy production of video content of a stroboscopic image [pg. 76, Description, lines 1-12].
Regarding claim 5, KUMAR, OGASAWARA, LUO and LEE, as modified by SUGANO (with the same motivation from claim 4), further teach:
[Claim 5] The image production system according to claim 1, wherein the output image generation unit generates an output image by combining a subject image by a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a subject image by a three-dimensional image generated by the three-dimensional image generation unit (SUGANO: fig. 1 see also pars. 15-20 and 21 [lines 1-2]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over KUMAR in view of OGASAWARA, further in view of LUO, further in view of LEE, and further in view of ROLLESTON (US2013/0238288A1).
Regarding claim 13, KUMAR teaches:
[Claim 13] The image production system according to claim 1, wherein
the three-dimensional image generation unit can generate a three-dimensional image on a basis of estimation information generated by the estimation information generation unit (KUMAR: pars. 10 and 42-43).
KUMAR does not teach the following; however, the analogous prior art OGASAWARA teaches:
the free viewpoint image generation unit can generate a first three-dimensional model that is a three-dimensional model of a subject on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generate a free viewpoint image that is an image of an arbitrary viewpoint for the subject using the first three-dimensional model (OGASAWARA: see par. 26).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the free viewpoint image generation unit can generate a first three-dimensional model that is a three-dimensional model of a subject on a basis of a plurality of pieces of captured image data obtained by simultaneously capturing images from a plurality of viewpoints, and generate a free viewpoint image that is an image of an arbitrary viewpoint for the subject using the first three-dimensional model as shown in OGASAWARA with KUMAR for the benefit of addressing a shortcoming in the prior art related to: in recent years, a technique of generating, from a plurality of images obtained by image capturing using a plurality of image capturing devices, an image (virtual viewpoint image) in which a captured scene is viewed from an arbitrary viewpoint has received a great deal of attention. Even in such a virtual viewpoint image, it is assumed that a marker is added to a target to be focused in a scene. If a marker is input to the virtual viewpoint image, the marker is displayed at an appropriate position when viewed from a viewpoint upon inputting the marker. However, if the viewpoint is switched to another viewpoint, the marker may be displayed at an unintended position. As described above, when rendering additional information such as a marker for a virtual viewpoint image, the rendered additional information may be displayed at an unintended position [3].
The previous combination of KUMAR and OGASAWARA does not teach the following limitation; however, the analogous prior art LUO teaches:
generate a three-dimensional image on a basis of a three-dimensional model of the subject (LUO: pg. 20, lines 3-11).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine LUO's teaching of generating a three-dimensional image on a basis of a three-dimensional model of the subject with the previous combination for the benefit of addressing a shortcoming in the prior art: in order to solve the problems existing in the prior art in measuring the distance between any two points in space, LUO provides a spatial distance measuring device including at least one sensor unit and a measuring method thereof. Further, LUO provides a scanning device based on the measuring device and the measuring method, which realizes flexible and effective scanning of spatial objects [pg. 5, lines 4-8].
The previous combination of KUMAR, OGASAWARA and LUO does not teach the following limitation; however, the analogous prior art ROLLESTON teaches:
The three-dimensional model is a second three-dimensional model that is a virtual three-dimensional model (ROLLESTON: par. 80).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine ROLLESTON's teaching that the three-dimensional model is a second three-dimensional model that is a virtual three-dimensional model with the previous combination for the benefit of addressing a shortcoming in the prior art: current virtual 3D rendering systems work only with single pieces, and not with collections of multiple pieces that must be spatially coordinated. There is a need for using virtual 3D rendering systems and other advanced technologies in conjunction with kit fulfillment to reduce development costs, reduce time-to-market, improve customer satisfaction, and improve the efficiency of various other aspects of kit fulfillment [3].
Allowable Subject Matter
Claims 2-3 and 6-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided the rejection under 35 U.S.C. 101 is also overcome.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claims 2-3 and 6-12, the prior art does not teach:
[Claim 2] The image production system according to claim 1, wherein the output image generation unit generates an output image by selectively using a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit.
[Claim 3] The image production system according to claim 2, wherein the output image generation unit generates an output image by selectively using a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit for each period.
[Claim 6] The image production system according to claim 1, wherein the output image generation unit generates an output image by selectively using a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit, on a basis of a camera path of the free viewpoint image.
[Claim 7] The image production system according to claim 1, wherein the output image generation unit performs quality determination processing of a free viewpoint image, and generates an output image by selectively using a live-action image including a free viewpoint image generated by the free viewpoint image generation unit and a three-dimensional image generated by the three-dimensional image generation unit according to a result of the quality determination.
[Claim 8] The image production system according to claim 7, wherein the quality determination processing determines quality of a free viewpoint image on a basis of an arrangement relationship between a plurality of imaging devices.
[Claim 9] The image production system according to claim 8, wherein the quality determination processing determines whether or not a target subject of a free viewpoint image exists in an area within a field of view of a predetermined number or more of imaging devices on a basis of an arrangement relationship between a plurality of imaging devices.
[Claim 10] The image production system according to claim 8, wherein the quality determination processing determines a section in which zoom magnification of an imaging device is a predetermined value or more in a camera path.
[Claim 11] The image production system according to claim 7, wherein the quality determination processing determines an arrangement relationship between a target subject for which a free viewpoint image is generated and another subject in an image at a viewpoint specified by a camera path.
[Claim 12] The image production system according to claim 11, wherein the quality determination processing determines a degree of congestion of a subject around a target subject for which a free viewpoint image is generated.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
HANAMOTO (US2020/0344456A1) discloses a providing apparatus, configured to provide three-dimensional geometric data to be used to generate a virtual viewpoint image, that receives a data request from a communication apparatus, decides which of a plurality of pieces of three-dimensional geometric data, including first three-dimensional geometric data and second three-dimensional geometric data with a different quality than the first three-dimensional geometric data, is to be provided to the communication apparatus from which the received data request was transmitted, and provides the three-dimensional geometric data decided on from among the plurality of pieces of three-dimensional geometric data to the communication apparatus as a response to the received data request. ADACHI (US2020/0137371A1) discloses an image processing apparatus for generating a virtual viewpoint image based on a plurality of images obtained by capturing an object in an image-capturing region from a plurality of directions using a plurality of image capturing apparatuses; the apparatus includes a viewpoint acquisition unit configured to acquire viewpoint information indicating a position of a virtual viewpoint, and a generation unit configured to generate the virtual viewpoint image corresponding to the position of the virtual viewpoint indicated by the acquired viewpoint information, the virtual viewpoint image including at least one of an image depending on a positional relationship between a boundary and the virtual viewpoint and/or an image representing the boundary, the boundary being between an inside and an outside of a predetermined region included in the image-capturing region. MAEDA (US2022/0189106A1) discloses an image processing apparatus that includes an acquisition unit configured to acquire three-dimensional shape data of an object based on images captured by a plurality of cameras, a generation unit configured to generate information based on a relationship between the three-dimensional shape data acquired by the acquisition unit and positions of the plurality of cameras, and a correction unit configured to correct the three-dimensional shape data based on the information generated by the generation unit. WANG (US2014/0340404A1) discloses a method for generating 3D viewpoint video content, comprising the steps of receiving videos shot by cameras distributed to capture an object; forming a 3D graphic model of at least part of the scene of the object based on the videos; receiving information related to a viewpoint and a 3D region of interest (ROI) in the object; and combining the 3D graphic model and the videos related to the 3D ROI to form hybrid 3D video content. MATSUBAYASHI (US2019/0158801A1) discloses a display controlling apparatus which comprises: an obtaining unit configured to obtain virtual camera path information related to a movement path of a virtual viewpoint related to a virtual viewpoint video image generated based on a plurality of shot images obtained by shooting a shooting target area with a plurality of cameras; a generating unit configured to generate a virtual camera path image representing a plurality of movement paths, including first and second movement paths of the virtual viewpoint, based on the virtual camera path information obtained by the obtaining unit; and a display controlling unit configured to display the virtual camera path image generated by the generating unit on a display screen.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAURICE L MCDOWELL, JR whose telephone number is (571)270-3707. The examiner can normally be reached Mon-Fri: 2pm-10pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome can be reached at 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAURICE L. MCDOWELL, JR/Primary Examiner, Art Unit 2612