DETAILED ACTION
Claims 1-23 are pending in the present application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of Japan patent application number JP2021-190451 filed on 11/25/2021 has been received and made of record.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 04/23/2024 and 02/26/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 23 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 23 recites a computer product; the claim, taken as a whole, appears to read on a computer listing per se.
Computer programs claimed as computer listings per se, i.e., the descriptions or expressions of the programs, are not physical "things." They are neither computer components nor statutory processes, as they are not "acts" being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program's functionality to be realized. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer which permit the computer program's functionality to be realized, and is thus statutory. See Lowry, 32 F.3d at 1583-84, 32 USPQ2d at 1035.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“a position estimation unit configured to estimate…” and “a marker control unit configured to control …” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-23 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPub 2007/0248283 to Mack et al. in view of U.S. PGPub 2020/0143592 to Cordes et al., further in view of Scherfgen (David Scherfgen, “Camera-Based 3D Pointing Approach Using Dynamic On-Screen Markers,” 15 June 2015, XP055368421, pages 1-145).
Regarding claim 1, Mack et al. teach an information processing apparatus comprising (par 0010):
a position estimation unit configured to estimate a position and a posture of an imaging camera configured to image an image by using a plurality of markers existing on a display surface of the display device (par 0010, “the processor receiving the captured view of at least one of the tracking markers from the tracking camera, the processor processing the captured view to determine a coordinate position of the scene camera, by identifying in the captured view at least one of the tracking markers by the identifying indicia, and determining the coordinate position of the scene camera relative to the at least one identified tracking marker. A feature of this embodiment includes that only one of the tracking markers in the captured view is needed to determine the coordinate position of the scene camera”).
However, Mack et al. are silent regarding an imaging camera configured to image an image displayed on a display device as a background.
In a related endeavor, Cordes et al. teach an imaging camera configured to image an image displayed on a display device as a background (par 0082, “Scenery images 214 can also provide background for the video content captured by a taking camera 112 (e.g., a visible light camera). Taking camera 112 can capture a view of performance area 202 from a single perspective”; par 0086, “the images inside the frustum of the taking camera 112 can be at a higher resolution than the images outside the frustum. In some embodiments, the images displayed outside the frustum of the camera can be relatively basic scenery images (e.g., blue sky, green grass, gray sea, or brown dirt.) In some instances the scenery images can be completely static. In other instances the scenery images 214 can dynamically change over time providing a more realistic background for the performance in the immersive environment 200”; par 0090, “The perspective-correct rendering can be completely independent from the global-view render and can include performers, props and background scenery within the frustum (e.g., frustum 318) of the taking camera 112. The perspective-correct rendering (block 504) represents a portion of the virtual environment and can [be] thought of as a patch that can be displayed on a portion of displays 104. As the global view can be captured by a virtual spherical camera, discrepancies can exist for images displayed on the displays in the background from the spherical camera as compared with images that captured within the frustum of the taking camera. Therefore, a patch can be created to correct the images in the background displays that appear within the frustum of the taking camera. In this way as the taking camera captures the one or more images with actors, props, and background, the background appears to be perspective-correct and do not move abnormally due to movement of the taking camera”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mack et al. to include an imaging camera configured to image an image displayed on a display device as a background, as taught by Cordes et al., in order to present images in real time in an immersive environment (sometimes referred to as an immersive “cave” or immersive “walls”) in which the portion of the one or more displays within the frustum of the camera is generated from a first render from the location and perspective of the camera that changes based on movement of the camera, while images of the virtual environment outside the frustum of the camera are generated from a second render from a global-view perspective that does not change based on movement of the camera, so that an actor or actress performing within the performance area can appear to be in the virtual environment.
However, Mack et al. as modified by Cordes et al. are silent regarding a marker control unit configured to control display or non-display of each of the markers on a basis of a viewing angle of the imaging camera determined from the position and the posture of the imaging camera.
In a related endeavor, Scherfgen teaches a position estimation unit configured to estimate a position and a posture of an imaging camera by using a plurality of markers existing on a display surface of the display device (Fig 4.5, section 4.1.3: the reference describes estimating the 6-DoF pose (position and posture) of a camera using dynamic on-screen markers displayed on a surface: “A marker pattern is defined by a 2D marker image and a set of marker feature points within it (see Figure 4.2 for an example). The marker image is composed of textured 2D triangles (three vertices, each with a position u, v, texture coordinates s, t and a color). The marker image is displayed on top of the application content so that it can be detected in camera images. The detection process outputs the pixel locations of the detected marker feature points in the camera image. Ideally, all feature points are detected and located with sub-pixel accuracy. The tracking system computes the camera pose from the correspondences between the located feature points' image coordinates and the corresponding world space points on the screen surfaces”; i.e., the camera pose and location are determined based on displayed virtual markers); and
a marker control unit configured to control display or non-display of each of the markers on a basis of a viewing angle of the imaging camera determined from the position and the posture of the imaging camera (Figs 4.3 and 4.5, section 4.1.3: the reference describes controlling the display of dynamic markers based on the position and orientation of the camera, allowing the system to adapt to different scenarios. The camera pose (i.e., viewing angle) influences the projection of the markers on the display array; depending on the camera pose, a marker is no longer displayed on the previous display but is instead moved to a new display within the new field of view of the camera, i.e., virtual markers are displayed for determining camera pose and location).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mack et al. as modified by Cordes et al. to include a marker control unit configured to control display or non-display of each of the markers on a basis of a viewing angle of the imaging camera determined from the position and the posture of the imaging camera, as taught by Scherfgen, in order to control the display of dynamic markers based on the position and orientation of the camera and allow the system to adapt to different scenarios.
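As an illustration only, the following sketch shows one plausible way such viewing-angle-based marker control could be implemented; the identifiers are hypothetical and the frustum test is a conventional technique, not the specific disclosure of Mack, Cordes, or Scherfgen. A marker whose three-dimensional position falls inside the taking camera's viewing angle (optionally widened by a margin, per the "inner frustum" of claim 3) is hidden, while all other markers remain displayed for tracking:

import numpy as np

def marker_in_frustum(marker_xyz, R, t, fov_h_deg, fov_v_deg, margin_deg=0.0):
    # R (3x3 rotation) and t (3-vector translation) give the camera pose in
    # world coordinates; transform the marker into camera coordinates.
    p = R.T @ (np.asarray(marker_xyz, dtype=float) - np.asarray(t, dtype=float))
    if p[2] <= 0.0:  # marker is behind the camera
        return False
    yaw = np.degrees(np.arctan2(p[0], p[2]))
    pitch = np.degrees(np.arctan2(p[1], p[2]))
    return (abs(yaw) <= fov_h_deg / 2 + margin_deg
            and abs(pitch) <= fov_v_deg / 2 + margin_deg)

def update_marker_visibility(markers, R, t, fov_h_deg, fov_v_deg, margin_deg=0.0):
    # Hide markers the taking camera would photograph (claims 1-3); keep the
    # remaining markers displayed so the tracking camera can still see them.
    for m in markers:
        m["visible"] = not marker_in_frustum(m["xyz"], R, t, fov_h_deg, fov_v_deg, margin_deg)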
Regarding claim 2, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the marker control unit performs control such that the marker within the viewing angle is not displayed and the marker outside the viewing angle is displayed (Mack et al.: par 0005, “Many existing systems require the use of a special background with embedded markers that enable the computer to calculate the camera's position in the virtual scene by using a marker detection method. These markers can interfere with the keying process, which typically performs best with a seamless background of the same color”; par 0023, “A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15”; Scherfgen: Fig 4.3, sections 2.1.2-2.1.4 and 4.1.3, which disclose marker control adapting dynamically to camera orientation, implicitly suggesting adjustments based on viewing angles and display conditions).
Regarding claim 3, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the marker control unit performs control such that the marker in an inner frustum set in an area including the viewing angle on a basis of the viewing angle is not displayed and the marker outside the inner frustum is displayed (Mack et al.: par 0005, “Many existing systems require the use of a special background with embedded markers that enable the computer to calculate the camera's position in the virtual scene by using a marker detection method. These markers can interfere with the keying process, which typically performs best with a seamless background of the same color”; par 0023, “A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15”; Scherfgen: Fig 4.3, sections 2.1.2-2.1.4 and 4.1.3, which disclose marker control adapting dynamically to camera orientation, implicitly suggesting adjustments based on viewing angles).
Regarding claim 4, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the marker control unit performs control such that, among time-divided frames of the image displayed on the display device, the marker is not displayed in the frame captured by the imaging camera, and the marker is displayed in the frame not captured by the imaging camera (Mack et al.: par 0005, “Many existing systems require the use of a special background with embedded markers that enable the computer to calculate the camera's position in the virtual scene by using a marker detection method. These markers can interfere with the keying process, which typically performs best with a seamless background of the same color”; Fig 1, par 0023, “A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15”; Scherfgen: Fig 4.3, sections 2.1.2-2.1.4 and 4.1.3, which disclose marker control adapting dynamically to camera orientation, implicitly suggesting adjustments based on viewing angles).
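As an illustration only, a minimal sketch of the time-division display of claim 4 follows; it assumes, hypothetically, that the display refresh rate is an integer multiple of the camera frame rate and that the two are synchronized (for example by genlock), neither of which is taken from the cited references:

def frame_shows_markers(display_frame_index, capture_every=2):
    # With capture_every=2, even-numbered display frames coincide with the
    # imaging camera's exposure (markers hidden) and odd-numbered frames fall
    # between exposures (markers shown for the tracking camera only).
    return display_frame_index % capture_every != 0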
Regarding claim 5, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the marker control unit controls a size of the marker according to a distance from the imaging camera to the marker (Mack et al.: par 0030, “the size of the individual tracking markers 22 in tracking marker pattern 20 becomes important to prevent sudden large jumps in the position and orientation data derived from their visibility to tracking camera 10. A larger marker provides more accurate tracking data, as the increased size provides the algorithms used in ARToolkit with more accurate data to derive relative position and orientation with. However, too large a marker means that fewer patterns are visible to tracking camera 10 at any given time”; Scherfgen: Fig 4.3, sections 2.1.2 and 4.1.3, which additionally describe the marker control unit controlling the size, hue, or distribution of markers based on camera distance, displayed hue, or display curvature; Figure 4.3 discloses at least that the marker size is adapted to the camera distance).
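As an illustration only, scaling the displayed marker with camera distance keeps its projected size in the tracking image roughly constant, since under the pinhole model the projected size is proportional to the physical size divided by the distance; the function below is a hypothetical sketch of such control:

import math

def marker_size_px(base_size_px, ref_distance_m, cam_xyz, marker_xyz):
    # Enlarge the on-screen marker linearly with its distance from the camera
    # so its apparent size in the tracking image stays approximately constant.
    d = math.dist(cam_xyz, marker_xyz)
    return base_size_px * (d / ref_distance_m)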
Regarding claim 6, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and Scherfgen further teaches wherein the marker control unit controls a hue of the marker according to a hue of an image displayed on the display surface on which the marker exists (Fig 4.3, sections 2.1.2 and 4.1.3, which describe the marker control unit controlling the size, hue, or distribution of markers based on camera distance, displayed hue, or display curvature; Figure 4.3 discloses at least that the marker size is adapted to the camera distance).
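As an illustration only, one hypothetical way to control marker hue according to the hue of the displayed image is to place the marker on the opposite side of the color wheel from the local background so the marker remains detectable; nothing in this heuristic is drawn from the cited references:

import colorsys

def marker_color(background_rgb):
    # Rotate the local background hue by 180 degrees and keep moderate
    # saturation and value so the marker contrasts with the backdrop.
    r, g, b = (c / 255.0 for c in background_rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    r2, g2, b2 = colorsys.hsv_to_rgb((h + 0.5) % 1.0, max(s, 0.5), max(v, 0.5))
    return tuple(int(round(c * 255)) for c in (r2, g2, b2))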
Regarding claim 7, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and Scherfgen further teaches wherein the marker exists on the display surface in a distribution based on a curvature of the display surface of the display device (Fig 4.3, section 4.1.3, which describes the marker control unit controlling the size, hue, or distribution of markers based on camera distance, displayed hue, or display curvature; Figure 4.3 discloses at least that the marker size is adapted to the camera distance).
Regarding claim 8, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and Scherfgen further teaches wherein the marker is an image displayed on the display surface of the display device (Fig 4.3, section 4.1.3, which discloses using images with feature points on a display surface as markers for tracking purposes).
Regarding claim 9, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 8, and further teach wherein the marker is a feature point of a mark, a two-dimensional code, a texture, or an image displayed on the display surface (Mack et al.: Fig 2, par 0028-0029, “the tracking marker pattern 20 FIG. 2 is composed of a set of binary coded markers 22. These markers 22 are described in the National Research Council of Canada in NRC Publication number 47419, ‘A Fiducial Marker System Using Digital Techniques’, 2004, by Dr. Mark Fiala”; Scherfgen: Fig 4.3, section 4.1.3, which discloses using images with feature points on a display surface as markers for tracking purposes).
Regarding claim 10, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 9, and further teach wherein the feature point is a feature point extracted from an image existing outside the viewing angle in the image displayed on the display device (Mack et al.: Fig 1, par 0023, “A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15. A tilt and roll sensor 14 is optionally attached to the scene camera 30”; Scherfgen: Figs 4.2 and 4.3, section 4.1.3, which disclose using images with feature points on a display surface as markers for tracking purposes).
Regarding claim 11, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the marker is a pixel embedded in the display surface of the display device (Mack et al.: Fig 2, par 0028-0029, “the tracking marker pattern 20 FIG. 2 is composed of a set of binary coded markers 22. These markers 22 are described in the National Research Council of Canada in NRC Publication number 47419, ‘A Fiducial Marker System Using Digital Techniques’, 2004, by Dr. Mark Fiala”; Scherfgen: Figs 4.2 and 4.3, section 4.1.3, which display markers as small images in the display surface).
Regarding claim 12, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and Scherfgen further teaches wherein the marker exists in an aperiodic distribution on the display surface (Figs 4.2 and 4.3, section 4.1.3, which display markers on the display surfaces).
Regarding claim 13, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the image displayed on the display device is controlled according to a position and a posture of the imaging camera (Mack et al.: par 0046, “[t]he code sample, provided in Appendix A provides the proper conversions to generate the position and orientation format needed by the engine: centimeters for X, Y, and Z positions, and degrees for X, Y, and Z rotations. When the scene camera 30 is moved, the virtual camera 120 inside the real-time three dimensional engine 100 sees both the virtual scene 120 and the proxy keyed image data 92 in matched position and orientations, and produces composited proxy images 220”; Cordes et al.: par 0081-0082, “Scenery images 214 can also provide background for the video content captured by a taking camera 112 (e.g., a visible light camera). Taking camera 112 can capture a view of performance area 202 from a single perspective. In some embodiments, the taking camera 112 can be stationary, while in other embodiments, the taking camera 112 can be mounted to a track 110 that can move the taking camera during the performance”; par 0085-0086, “embodiments of the invention can render the portion 326 of the displays 104 that corresponds to frustum 318 as perspective-correct images that can update based on movement of the taking camera 112. For example, taking camera 112 can move during a performance as performer 210 moves or to capture the performer from a different angle. As the taking camera 112 moves, portions of the scenery images 214 within the viewing frustum 318 can be updated in accordance with the perspective of the camera … the images inside the frustum of the taking camera 112 can be at a higher resolution than the images outside the frustum. In some embodiments, the images displayed outside the frustum of the camera can be relatively basic scenery images (e.g., blue sky, green grass, gray sea, or brown dirt.) In some instances the scenery images can be completely static. In other instances the scenery images 214 can dynamically change over time providing a more realistic background for the performance in the immersive environment 200”; par 0101, “FIGS. 7 and 8 depict a performer 210 in a performance area 102 surrounded at least partially by multiple displays 104 that display scenery images 214 to be captured by the multiple taking cameras. The multiple taking cameras (shown as a first taking camera 112a and a second taking camera 112b) can be directed at a performance area 102 (including the virtual environment presented on the displays 104 (e.g., the LED or LCD display walls) to concurrently capture images”; Scherfgen: Fig 4.5, section 4.1.3, which discloses dynamic marker adaptation to the camera).
Regarding claim 14, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the display device is provided at each of a plurality of positions in an environment where the imaging camera is installed, and the marker exists on a display surface of a display device configured to display an image of a fixed viewpoint among a plurality of the display devices (Mack et al.: Figs 1 and 2, par 0023, “The scene camera 30 is connected to a computer 70 by a scene camera data cable 32. A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15”; Cordes et al.: Fig 1, par 0079, “motion cameras 122 can be used to track the movement of the taking camera 112 and provide a location of the taking camera to content production system 100 as part of the process that determines what portion of displays 104 are rendered from the tracked position of and the perspective of the taking camera”; par 0082, “Scenery images 214 can also provide background for the video content captured by a taking camera 112 (e.g., a visible light camera)”; Scherfgen: Figs 4.3 and 4.5, section 4.1.3: the reference describes controlling the display of dynamic markers based on the position and orientation of the camera, allowing the system to adapt to different scenarios; the camera pose (i.e., viewing angle) influences the projection of the markers on the display array, and depending on the camera pose, a marker is no longer displayed on the previous display but is instead moved to a new display within the new field of view of the camera).
Regarding claim 15, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 14, and further teach wherein the display device configured to display the image of the fixed viewpoint is provided at least on a ceiling of the environment (Mack et al.: Fig 1, par 0023, “The scene camera 30 is connected to a computer 70 by a scene camera data cable 32. A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15”; Cordes et al.: par 0004; par 0009, “an immersive content system that can be used in the production of movies or other video content can include a stage or performance area that is at least partially enclosed with one or more walls and/or a ceiling each of which can be covered with display screens”; par 0072, “content production system 100 can further include one or more displays 104 as a ceiling on performance area 102 and/or as part of the floor of the performance area”; Fig 1, par 0079, “motion cameras 122 can be used to track the movement of the taking camera 112 and provide a location of the taking camera to content production system 100 as part of the process that determines what portion of displays 104 are rendered from the tracked position of and the perspective of the taking camera”; par 0082, “Scenery images 214 can also provide background for the video content captured by a taking camera 112 (e.g., a visible light camera)”; Scherfgen: Fig 4.3, sections 2.1.2-2.1.4 and 4.1.3, which disclose marker control adapting dynamically to camera orientation, implicitly suggesting adjustments based on viewing angles and display conditions).
Regarding claim 16, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach wherein the position estimation unit estimates a position and a posture of the imaging camera on a basis of a distribution image of the plurality of markers captured by a tracking camera provided integrally with the imaging camera (Mack et al.: Figs 1-2, par 0023-0026, “A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15. A tilt and roll sensor 14 is optionally attached to the scene camera 30 … The tracking camera 10 is used to collect images of the tracking marker pattern 20. The image quality required for tracking the tracking marker 10 is lower than the image quality generally required for the scene camera 30, enabling the use of a lower cost tracking camera 10”; Scherfgen: Fig 4.5, section 4.1.3, which discloses estimating the camera position and pose based on a distribution of markers on the display).
Regarding claim 17, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 16, and further teach wherein the position estimation unit estimates a position and a posture of the imaging camera on a basis of a two-dimensional position of the marker in a distribution image of the plurality of markers captured by the tracking camera and a three-dimensional position of the marker (Mack et al.: Figs 1-2, par 0023-0026, “A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15. A tilt and roll sensor 14 is optionally attached to the scene camera 30 … The tracking camera 10 is used to collect images of the tracking marker pattern 20. The image quality required for tracking the tracking marker 10 is lower than the image quality generally required for the scene camera 30, enabling the use of a lower cost tracking camera 10”; Scherfgen: Figs 4.3 and 4.5, section 4.1.3, which disclose estimating the camera position and pose based on a distribution of markers on the display).
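As an illustration only, recovering a camera pose from the markers' two-dimensional image positions and known three-dimensional positions is the standard Perspective-n-Point (PnP) problem; the sketch below uses OpenCV's solvePnP as one conventional solver, which is not prescribed by any of the cited references:

import numpy as np
import cv2

def estimate_camera_pose(marker_xyz, marker_uv, K, dist_coeffs=None):
    # marker_xyz: (N, 3) marker positions on the display surfaces (world frame).
    # marker_uv:  (N, 2) detected marker pixel locations in the tracking image.
    # K:          (3, 3) tracking-camera intrinsic matrix. Requires N >= 4.
    obj = np.asarray(marker_xyz, dtype=np.float64)
    img = np.asarray(marker_uv, dtype=np.float64)
    dist = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)        # rotation mapping world to camera frame
    cam_pos = (-R.T @ tvec).ravel()   # camera center in world coordinates
    return R, cam_pos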
Regarding claim 18, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 17, and Scherfgen further teaches wherein the three-dimensional position of the marker is derived on a basis of information regarding a shape of the display surface of the display device and information indicating a position of the marker on the display surface (Figs 4.3 and 4.5, section 4.1.3, which disclose estimating the camera position and pose based on a distribution of markers on the display).
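As an illustration only, for a planar display panel whose placement in the world is known, a marker's three-dimensional position follows directly from its two-dimensional position on the display surface; the panel parameters below are hypothetical:

import numpy as np

def marker_world_position(marker_uv_m, panel_origin, panel_x_axis, panel_y_axis):
    # marker_uv_m: (u, v) marker offsets in meters along the panel's in-plane
    # axes, measured from the panel's origin corner.
    u, v = marker_uv_m
    return (np.asarray(panel_origin, dtype=float)
            + u * np.asarray(panel_x_axis, dtype=float)
            + v * np.asarray(panel_y_axis, dtype=float))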
Regarding claim 19, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 17, and Scherfgen further teaches wherein the three-dimensional position of the marker is derived on a basis of information indicating a distance to the marker measured by a distance measurement device provided in the tracking camera and a distribution image of the marker (Figs 4.3 and 4.5, section 4.1.3, which disclose estimating the camera position and pose based on a distribution of markers on the display).
Regarding claim 20, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 16, and Mack et al. further teach wherein the viewing angle of the imaging camera and a viewing angle of the tracking camera are different from each other (Mack et al.: Fig 1, par 0023, “The scene camera 30 is connected to a computer 70 by a scene camera data cable 32. A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15”; Cordes et al.: Fig 1, par 0079, “motion cameras 122 can be used to track the movement of the taking camera 112 and provide a location of the taking camera to content production system 100 as part of the process that determines what portion of displays 104 are rendered from the tracked position of and the perspective of the taking camera”).
Regarding claim 21, Mack et al. as modified by Cordes et al. and Scherfgen teach all the limitations of claim 1, and further teach the display device in which the plurality of markers exists on the display surface; the imaging camera configured to image an image displayed on the display device as a background; and a tracking camera that is provided integrally with the imaging camera and captures images of the plurality of markers (Mack et al.: Fig 1, par 0023, “The scene camera 30 is connected to a computer 70 by a scene camera data cable 32. A tracking camera 10 is attached to the scene camera 30 and oriented so that some or all of a tracking marker pattern 20 is within its field of view 15”; Cordes et al.: Fig 1, par 0079, “motion cameras 122 can be used to track the movement of the taking camera 112 and provide a location of the taking camera to content production system 100 as part of the process that determines what portion of displays 104 are rendered from the tracked position of and the perspective of the taking camera”; par 0082, “Scenery images 214 can also provide background for the video content captured by a taking camera 112 (e.g., a visible light camera)”; Scherfgen: Fig 4.3, section 4.1.3, which provides markers on the display surfaces).
Regarding claim 22, method claim 22 is similar in scope to claim 1 and is rejected under the same rationale.
Regarding claim 23, claim 23 is similar in scope to claim 1 and is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge whose telephone number is (571)272-5556. The examiner can normally be reached from 8:00 to 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIN GE
Examiner
Art Unit 2619
/JIN GE/Primary Examiner, Art Unit 2619