DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that this application is a National Stage application of PCT/US2021/054704. Priority to PCT/US2021/054704, with a priority date of 13 October 2021, is acknowledged under 35 U.S.C. 119(e) and 37 CFR 1.78.
Information Disclosure Statement
The IDS dated 24 April 2024 has been considered and placed in the application file.
Specification - Title
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: Improving Background Display During Teleconferencing.
Examiner's Note
While the current application has not been restricted, the amount of searching was extensive. Amendments may cause a restriction requirement, or an election by original presentation, especially with regard to claim 5 (a particular frequency based on a predetermined number of frames of the video signal), claim 11 (generate an intermediate frame having the low-light adjusted pixel values), and claim 13 (transition frame), which are not well connected to claim 1 (backlight adjustment of the frame).
Claim Interpretation (Contingent Limitation)
Under MPEP 2111.04, claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met.
For example, assume a method claim requires step A if a first condition happens and step B if a second condition happens. If the claimed invention may be practiced without either the first or second condition happening, then neither step A nor step B is required by the broadest reasonable interpretation of the claim.
In this case, claim 15 recites “in response to the frame not being a transition frame,” followed by “determining, by the controller, whether the second LUT is a same LUT as the first LUT,” making the method step contingent. While some citations have been provided for completeness and rapid prosecution, the method step is not required under the broadest reasonable interpretation. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“a processing resource” in claim 1; and
“a memory resource storing non-transitory machine-readable instructions” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
1st Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 4, 6-10 and 12 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2022/0270556 A1 (Neumeier et al.) in view of US Patent Publication 2021/0183020 A1 (Gollanapalli et al.).
[AltContent: textbox (Neumeier et al., Fig. 1D, showing a TV used in a video conference with luminance adjusted.)]
Claim 1
Regarding Claim 1, Neumeier et al. teach a controller ("at least one non-transitory computer readable medium having instructions stored on it which, when executed by a computer, carry out the analyses of images captured by camera 102 and the adjustment to at least one video display pixel parameter value," paragraph [0031]), comprising:
a processing resource ("TV 101 has at least one processor," paragraph [0031]); and
a memory resource storing non-transitory machine-readable instructions to cause the processing resource ("at least one non-transitory computer readable medium having instructions stored on it which, when executed by a computer, carry out the analyses of images captured by camera 102 and the adjustment to at least one video display pixel parameter value," paragraph [0031]) to:
perform object detection on a frame of a plurality of frames of a video signal ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032]);
generate a bounding box around an object in the frame in response to the object detection detecting the object in the frame ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032] where the frame is a bounding box);
perform a backlight adjustment of the frame ("the video display pixel luminance (Y) values in FIG. 1D are increased relative to those of FIG. 1A by increasing the LED backlight intensity (IBL)," paragraph [0031]) by:
setting an adjusted pixel value for each pixel of the plurality of the pixels included in the frame based on the comparison ("In accordance with the method, adjustments to at least one video display pixel parameter value are made based on the average captured pixel intensity Yavg max for captured subject pixels exclusive of the subject's surrounding environment," paragraph [0051] and "Adjusting the LED backlight intensity changes pixel luminance (Y) and can impact the emitted color from the pixels. Thus, in certain examples, the RGB values of the adjusted pixels are also adjusted to maintain the same color balance of red, green, and blue in the emitted colors and to maintain Yavg at Ysp. The adjustments are preferably based on the current and previous values of the backlight intensity (IBLn and IBLn-1, respectively), and the current RGB values," paragraph [0069], where the average captured pixel intensity is the overall pixel value and a pre-generated LUT is taught by the adjusted pixels also being adjusted to maintain color balance); and
generate an output frame having the adjusted pixel values for each pixel ("The adjustment to the individual pixel luminance values can affect the emitted color as perceived by the viewer," paragraph [0080]).
[AltContent: textbox (Gollanapalli et al., Fig 7A, showing a blur.)]
Neumeier et al. do not explicitly teach all of the blur value limitations.
However, Gollanapalli et al. teach comparing a pixel value and a blur value ("the threshold weight of the feature map indicates one of: an overall blur weightage of the feature map and a minimum blur feature value to be present in the feature map; comparing each pixel value of the feature map with the threshold weight corresponding to the feature map," paragraph [0011]) of each pixel of a plurality of pixels included in the frame to a database ("For example, at least one of these components, elements or units may use a direct circuit structure, such as a memory, a processor, a logic circuit, a look-up table, etc. that may execute the respective functions," paragraph [0110]).
Therefore, taking the teachings of Neumeier et al. and Gollanapalli et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the “Systems and Methods for Enhancing Television Display for Video conferencing and Video Watch party Applications” as taught by Neumeier et al. to use deblurring as taught by Gollanapalli et al. The suggestion/motivation for doing so would have been that, “In the related art method, the blurred image is deblurred using deconvolution operations, which require an accurate knowledge on blur features in the blurred image. However, recovering the blur features from a single blurred image is a difficult task due to a loss of details in the image,” as noted by the Gollanapalli et al. disclosure in paragraph [0004]. The combination would also predictably have a better chance of deblurring a picture, as there is a reasonable expectation that humans will move, causing blurring; and/or doing so merely combines prior art elements according to known methods to yield predictable results.
The rejection of apparatus claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 8 and method claim 12 while noting that the rejection above cites to both device and method disclosures. Claims 8 and 12 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.
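For clarity of the record, the combined flow as mapped to claim 1 above — object detection, a bounding box, and a LUT-style per-pixel adjustment — may be illustrated by the following sketch. All names, the gain value, and the comparison rule are hypothetical and do not represent the actual implementation of either cited reference.

```python
import numpy as np

def adjust_frame(frame, detect_object, lut):
    """Hypothetical claim-1-style flow: object detection, a bounding box,
    and a backlight adjustment that sets each pixel via a lookup table."""
    box = detect_object(frame)                 # e.g., facial recognition
    if box is None:
        return frame                           # no object detected in the frame
    y0, y1, x0, x1 = box                       # bounding box around the object
    face_mean = frame[y0:y1, x0:x1].mean()     # pixel value inside the box
    overall_mean = frame.mean()                # overall pixel value of the frame
    # The comparison drives the adjustment; 1.2 is an arbitrary illustrative gain.
    gain = 1.0 if face_mean >= overall_mean else 1.2
    adjusted = lut[np.clip(frame * gain, 0, 255).astype(np.uint8)]
    return adjusted                            # output frame with adjusted pixel values
```

With an identity LUT (`np.arange(256, dtype=np.uint8)`) and a detector returning a fixed box, the frame passes through unchanged when the box mean matches the overall mean; a frame with no detected object is returned as-is.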
Claim 2
Regarding claim 2, Neumeier et al. teach the controller of claim 1, wherein the memory resource includes instructions to cause the processing resource to select a LUT from the plurality of pre-generated LUTs based on a pixel value of pixels included in the bounding box and an overall pixel value of the plurality of pixels included in the frame ("In accordance with the method, adjustments to at least one video display pixel parameter value are made based on the average captured pixel intensity Yavg max for captured subject pixels exclusive of the subject's surrounding environment," paragraph [0051]).
Neumeier et al. are not relied upon to show lookup tables.
However, Gollanapalli et al. teach wherein: the database includes a plurality of pre-generated lookup tables (LUTs) ("For example, at least one of these components, elements or units may use a direct circuit structure, such as a memory, a processor, a logic circuit, a look-up table, etc. that may execute the respective functions," paragraph [0110]); and
Neumeier et al. and Gollanapalli et al. are combined as per claim 1.
Claim 4
Regarding claim 4, Neumeier et al. teach the controller of claim 1, wherein performing the object detection includes performing facial recognition on the frame to determine whether there is a face of a subject in the frame ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032]).
Claim 6
Regarding claim 6, Neumeier et al. teach the controller of claim 1, including instructions to cause the processing resource to determine the blur value of each pixel by:
converting each pixel of the plurality of pixels from a red, green, and blue (RGB) color space to grayscale, wherein the pixel value is an RGB value ("The luminance (Y) value for a pixel is a gray scale intensity value which can also be represented as a weighted function of the R, G, and B intensities. A given luminance value can be provided at different ratios of R, G, and B to one another," paragraph [0025]); and
applying a Gaussian filter to each pixel of the plurality of pixels ("The encoder ResBlock 405 extracts and models blur information in the blurred image 401 using learned filters to generate an encoded image 406," paragraph [0083], where a Gaussian filter is taught by learned filters).
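The claimed blur-value determination of claim 6 — grayscale conversion followed by Gaussian filtering — can be sketched as follows. The BT.601 luma weights, kernel size, and sigma are common defaults chosen for illustration and are not drawn from the cited references.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur_values(rgb_frame):
    """Convert an RGB frame (H x W x 3) to grayscale, then apply a
    Gaussian filter; the filtered intensities serve as per-pixel blur values."""
    # Grayscale as a weighted function of the R, G, and B intensities
    gray = (rgb_frame[..., 0] * 0.299
            + rgb_frame[..., 1] * 0.587
            + rgb_frame[..., 2] * 0.114)
    k = gaussian_kernel()
    pad = k.shape[0] // 2
    padded = np.pad(gray, pad, mode="edge")   # edge padding at the borders
    out = np.zeros_like(gray, dtype=float)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            out[i, j] = (padded[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out
```

Because the kernel is normalized, a uniform frame is left unchanged by the filter, which is a convenient sanity check on the convolution.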
Claim 7
Regarding claim 7, Neumeier et al. teach the controller of claim 1, including instructions to cause the processing resource to further perform a low-light adjustment of the frame in response to the plurality of pixels being less than a threshold pixel value ("If the average captured subject pixel luminance set point Ysp n is less than the maximum average pixel luminance Y avg max n in step 1056, control transfers to step 1062. In step 1062 the luminance values of each pixel in the picture field 106a are adjusted to achieve a value of Yavg n that is equal to Yw," paragraph [0074]).
Claim 8
Regarding claim 8, Neumeier et al. teach a non-transitory machine-readable storage medium including instructions that when executed cause a processing resource ("at least one non-transitory computer readable medium having instructions stored on it which, when executed by a computer, carry out the analyses of images captured by camera 102 and the adjustment to at least one video display pixel parameter value," paragraph [0031]) to:
perform facial recognition on a frame of a plurality of frames of a video signal to determine whether there is a face of a subject in the frame ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032]);
generate a bounding box around the face in the frame in response to the facial recognition detecting the face in the frame, wherein the frame includes a plurality of pixels ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032] where the frame is a bounding box);
determine a pixel value of pixels included in the bounding box and an overall pixel value of the plurality of pixels ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032]);
perform a backlight adjustment of the frame ("the video display pixel luminance (Y) values in FIG. 1D are increased relative to those of FIG. 1A by increasing the LED backlight intensity (IBL)," paragraph [0031]) by:
selecting a lookup table (LUT) from a plurality of pre-generated LUTs based on the pixel value of the pixels in the bounding box and the overall pixel value of the plurality of pixels ("In accordance with the method, adjustments to at least one video display pixel parameter value are made based on the average captured pixel intensity Yavg max for captured subject pixels exclusive of the subject's surrounding environment," paragraph [0051] and "Adjusting the LED backlight intensity changes pixel luminance (Y) and can impact the emitted color from the pixels. Thus, in certain examples, the RGB values of the adjusted pixels are also adjusted to maintain the same color balance of red, green, and blue in the emitted colors and to maintain Yavg at Ysp. The adjustments are preferably based on the current and previous values of the backlight intensity (IBLn and IBLn-1, respectively), and the current RGB values," paragraph [0069], where the average captured pixel intensity is the overall pixel value and a pre-generated LUT is taught by the adjusted pixels also being adjusted to maintain color balance);
setting an adjusted pixel value for each pixel of the plurality of the pixels included in the frame based on the selected LUT ("In accordance with the method, adjustments to at least one video display pixel parameter value are made based on the average captured pixel intensity Yavg max for captured subject pixels exclusive of the subject's surrounding environment," paragraph [0051] and "Adjusting the LED backlight intensity changes pixel luminance (Y) and can impact the emitted color from the pixels. Thus, in certain examples, the RGB values of the adjusted pixels are also adjusted to maintain the same color balance of red, green, and blue in the emitted colors and to maintain Yavg at Ysp. The adjustments are preferably based on the current and previous values of the backlight intensity (IBLn and IBLn-1, respectively), and the current RGB values," paragraph [0069], where the average captured pixel intensity is the overall pixel value and a pre-generated LUT is taught by the adjusted pixels also being adjusted to maintain color balance); and
generate an output frame having the adjusted pixel values for each pixel ("The adjustment to the individual pixel luminance values can affect the emitted color as perceived by the viewer," paragraph [0080]).
Neumeier et al. do not explicitly teach all of the blur value limitations.
However, Gollanapalli et al. teach comparing the pixel value and a blur value ("the threshold weight of the feature map indicates one of: an overall blur weightage of the feature map and a minimum blur feature value to be present in the feature map; comparing each pixel value of the feature map with the threshold weight corresponding to the feature map," paragraph [0011]) of each pixel of the plurality of pixels to the selected LUT ("For example, at least one of these components, elements or units may use a direct circuit structure, such as a memory, a processor, a logic circuit, a look-up table, etc. that may execute the respective functions," paragraph [0110]); and
Neumeier et al. and Gollanapalli et al. are combined as per claim 1.
Claim 9
Regarding claim 9, Neumeier et al. teach the non-transitory storage medium of claim 8, including instructions to perform a low-light adjustment of the frame in response to the pixel value of the plurality of pixels being less than a threshold value ("If the average captured subject pixel luminance set point Ysp n is less than the maximum average pixel luminance Y avg max n in step 1056, control transfers to step 1062. In step 1062 the luminance values of each pixel in the picture field 106a are adjusted to achieve a value of Yavg n that is equal to Yw," paragraph [0074]).
Claim 10
Regarding claim 10, Neumeier et al. teach the non-transitory storage medium of claim 9, wherein performing the low-light adjustment includes instructions to:
select a different LUT from the plurality of LUTs ("Adjusting the LED backlight intensity changes pixel luminance (Y) and can impact the emitted color from the pixels. Thus, in certain examples, the RGB values of the adjusted pixels are also adjusted to maintain the same color balance of red, green, and blue in the emitted colors and to maintain Yavg at Ysp. The adjustments are preferably based on the current and previous values of the backlight intensity (IBLn and IBLn-1, respectively), and the current RGB values," paragraph [0069], which shows a plurality of settings, which can be implemented as look-up tables as shown by Gollanapalli);
compare the pixel value and a blur value of each pixel of a plurality of pixels included in the frame to the different LUT (paragraph [0069], as quoted above, which shows a plurality of settings, which can be implemented as look-up tables as shown by Gollanapalli); and
set a low-light adjusted pixel value for each pixel of the plurality of the pixels included in the frame based on the different LUT ("The adjustment to the individual pixel luminance values can affect the emitted color as perceived by the viewer," paragraph [0080]).
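The claim 9-10 flow — a low-light adjustment triggered when the pixel values fall below a threshold, using a different pre-generated LUT — can be sketched as follows. The threshold value and both tables are hypothetical and chosen purely for illustration.

```python
import numpy as np

LOW_LIGHT_THRESHOLD = 60  # hypothetical trigger level on a 0-255 scale

def select_lut(frame, normal_lut, low_light_lut):
    """Select a different pre-generated LUT when the pixel values of the
    frame fall below the low-light threshold (claims 9-10)."""
    if frame.mean() < LOW_LIGHT_THRESHOLD:
        return low_light_lut      # low-light adjustment path
    return normal_lut             # default backlight-adjustment path

def apply_lut(frame, lut):
    """Set a (low-light) adjusted pixel value for each pixel from the LUT."""
    return lut[frame.astype(np.uint8)]
```

For example, with a brightening table that doubles intensities, a dark frame selects that table while a bright frame keeps the default.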
Claim 12
Regarding claim 12, Neumeier et al. teach a method, comprising:
performing, by a controller, facial recognition on a frame of a plurality of frames of a video signal to determine whether there is a face of a subject in the frame ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032]);
generating, by the controller, a bounding box around the face in the frame in response to the facial recognition detecting the face in the frame, wherein the frame includes a plurality of pixels ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032] where the frame is a bounding box);
determining, by the controller, a pixel value of pixels included in the bounding box and an overall pixel value of the plurality of pixels ("Using known techniques, such as facial recognition techniques, the at least one pixel parameter value used to determine the size of the frame of border pixels 114b is at least one pixel parameter value for those pixels in the captured image which define subjects 107 and not for the pixels defining environment 112," paragraph [0032]);
performing, by the controller, a backlight adjustment of the frame ("the video display pixel luminance (Y) values in FIG. 1D are increased relative to those of FIG. 1A by increasing the LED backlight intensity (IBL)," paragraph [0031]) by:
selecting a first lookup table (LUT) from a plurality of pre-generated LUTs based on the pixel value of the pixels in the bounding box and the overall pixel value of the plurality of pixels ("In accordance with the method, adjustments to at least one video display pixel parameter value are made based on the average captured pixel intensity Yavg max for captured subject pixels exclusive of the subject's surrounding environment," paragraph [0051] and "Adjusting the LED backlight intensity changes pixel luminance (Y) and can impact the emitted color from the pixels. Thus, in certain examples, the RGB values of the adjusted pixels are also adjusted to maintain the same color balance of red, green, and blue in the emitted colors and to maintain Yavg at Ysp. The adjustments are preferably based on the current and previous values of the backlight intensity (IBLn and IBLn-1, respectively), and the current RGB values," paragraph [0069], where the average captured pixel intensity is the overall pixel value and a pre-generated LUT is taught by the adjusted pixels also being adjusted to maintain color balance);
setting an adjusted pixel value for each pixel of the plurality of the pixels included in the frame based on the first LUT ("In accordance with the method, adjustments to at least one video display pixel parameter value are made based on the average captured pixel intensity Yavg max for captured subject pixels exclusive of the subject's surrounding environment," paragraph [0051] and "Adjusting the LED backlight intensity changes pixel luminance (Y) and can impact the emitted color from the pixels. Thus, in certain examples, the RGB values of the adjusted pixels are also adjusted to maintain the same color balance of red, green, and blue in the emitted colors and to maintain Yavg at Ysp. The adjustments are preferably based on the current and previous values of the backlight intensity (IBLn and IBLn-1, respectively), and the current RGB values," paragraph [0069], where the average captured pixel intensity is the overall pixel value and a pre-generated LUT is taught by the adjusted pixels also being adjusted to maintain color balance);
generating, by the controller, an output frame having the adjusted pixel values for each pixel ("The adjustment to the individual pixel luminance values can affect the emitted color as perceived by the viewer," paragraph [0080]); and
displaying, by a display, the output frame ("The adjustment to the individual pixel luminance values can affect the emitted color as perceived by the viewer," paragraph [0080]).
Neumeier et al. do not explicitly teach all of the blur value limitations.
However, Gollanapalli et al. teach comparing the pixel value and a blur value ("the threshold weight of the feature map indicates one of: an overall blur weightage of the feature map and a minimum blur feature value to be present in the feature map; comparing each pixel value of the feature map with the threshold weight corresponding to the feature map," paragraph [0011]) of each pixel of the plurality of pixels to the first LUT ("For example, at least one of these components, elements or units may use a direct circuit structure, such as a memory, a processor, a logic circuit, a look-up table, etc. that may execute the respective functions," paragraph [0110]).
Neumeier et al. and Gollanapalli et al. are combined as per claim 1.
2nd Claim Rejections - 35 USC § 103
Claim 5 is rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2022/0270556 A1 (Neumeier et al.) and US Patent Publication 2021/0183020 A1 (Gollanapalli et al.) in view of US Patent 10,360,876 B1 (Rahman).
Claim 5
Regarding claim 5, Neumeier et al. and Gollanapalli et al. teach the controller of claim 4, as noted above.
[AltContent: textbox (Rahman, Fig. 7, showing checking if a human has moved.)]
Neumeier et al. and Gollanapalli et al. do not explicitly teach all of the limitations regarding facial recognition at a particular frequency.
However, Rahman teaches including instructions to cause the processing resource to perform the facial recognition at a particular frequency based on a predetermined number of frames of the video signal that are received ("the direction of the moving viewer's 306 constantly updating position along the path 324 is tracked and a location, size, etc. of the potential region 310 is updated at a particular frequency (e.g., in real-time, every 2 seconds, every 10 frames of 60 content, etc.)," Col. 9, Lines 56-61).
Therefore, taking the teachings of Neumeier et al., Gollanapalli et al., and Rahman as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the video conference enhancement and deblurring systems as taught by Neumeier et al. and Gollanapalli et al. to use regular instances of facial recognition as taught by Rahman. The suggestion/motivation for doing so would have been that, “People in such a situation may choose to view content on their own computing device, such as a smart phone or tablet; however, people may also desire to have a single device that can be shared by multiple people in an efficient and communal manner,” as noted by the Rahman disclosure in Col. 1, Lines 12-16. The combination would also predictably have a higher efficiency, as there is a reasonable expectation that people will move and share a particular device; and/or doing so merely combines prior art elements according to known methods to yield predictable results.
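The claim 5 limitation — performing facial recognition at a particular frequency based on a predetermined number of received frames — can be sketched as follows; running the detector only on every Nth frame and reusing the last result in between. `N_FRAMES` and the detector interface are hypothetical.

```python
N_FRAMES = 10  # predetermined number of frames between recognition passes

def process_stream(frames, recognize_face):
    """Run facial recognition at a particular frequency (once every
    N_FRAMES received frames), reusing the last bounding box otherwise."""
    last_box = None
    boxes = []
    for i, frame in enumerate(frames):
        if i % N_FRAMES == 0:           # particular frequency: every Nth frame
            last_box = recognize_face(frame)
        boxes.append(last_box)          # intervening frames reuse the result
    return boxes
```

On a 25-frame stream the detector fires only on frames 0, 10, and 20, which is the efficiency gain such a frequency limitation targets.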
3rd Claim Rejections - 35 USC § 103
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication 2022/0270556 A1 (Neumeier et al.) and US Patent Publication 2021/0183020 A1 (Gollanapalli et al.) in view of US Patent Publication 2015/0130958 A1 (Pavani).
Claim 11
Regarding Claim 11, Neumeier et al. and Gollanapalli et al. teach the non-transitory storage medium of claim 10, as noted above.
[Pavani, Fig. 3, showing generating intermediate frames.]
Neumeier et al. and Gollanapalli et al. do not explicitly teach intermediate frames.
However, Pavani teaches including instructions to:
generate an intermediate frame having the low-light adjusted pixel values for each pixel of the plurality of pixels in the frame ("enhanced color imaging in a low light environment. The method includes: illuminating a first color, a second color, and a third color light at different time periods; capturing a first color image frame, a second color image frame and a third color image frame, response to the illumination; generating an intermediate color frame from each of the first, second and third color image frames; determining moving pixels in the intermediate color frame; determining a true color for the moving pixels in intermediate color frame," paragraph [0009]); and
perform the backlight adjustment on the intermediate frame ("enhanced color imaging in a low light environment. The method includes: illuminating a first color, a second color, and a third color light at different time periods; capturing a first color image frame, a second color image frame and a third color image frame, response to the illumination; generating an intermediate color frame from each of the first, second and third color image frames; determining moving pixels in the intermediate color frame; determining a true color for the moving pixels in intermediate color frame," paragraph [0009]).
Therefore, taking the teachings of Neumeier et al., Gollanapalli et al., and Pavani as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the systems and methods for enhancing video conferencing and deblurring as taught by Neumeier et al. and Gollanapalli et al. to use “a System and Method for Color Imaging under Low Light” as taught by Pavani. The suggestion/motivation for doing so would have been that “Accordingly, there is a need for an enhanced video image processing technique that decreases noise and color artifacts, while minimizing motion blur and resolution degradation, without requiring a complex architecture, large memory, and/or high bandwidth,” as noted by the Pavani disclosure in paragraph [0005]. The combination is further motivated because it would predictably be more efficient in low-light conditions, as there is a reasonable expectation that video teleconferencing would take place in less than full light, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
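The claim 11 limitations as mapped above (generating an intermediate frame of low-light-adjusted pixel values, then performing the backlight adjustment on that intermediate frame) can be sketched as a two-stage pipeline. All function names, the gain, and the offset below are hypothetical placeholders chosen for clarity; they are not drawn from Pavani or the instant disclosure.

```python
# Hypothetical two-stage sketch: first produce an intermediate frame of
# low-light-adjusted pixel values, then apply a backlight adjustment to
# that intermediate frame.

def low_light_adjust(pixel, gain=2.0):
    """Brighten one 0-255 pixel value, clamped to the valid range."""
    return min(int(pixel * gain), 255)


def backlight_adjust(frame, offset=-16):
    """Assumed backlight compensation: shift every pixel, clamped at 0."""
    return [max(p + offset, 0) for p in frame]


def process_frame(frame):
    # Stage 1: intermediate frame with low-light-adjusted values
    intermediate = [low_light_adjust(p) for p in frame]
    # Stage 2: backlight adjustment performed on the intermediate frame
    return backlight_adjust(intermediate)
```

The point of the sketch is only the ordering: the backlight adjustment operates on the intermediate frame rather than on the raw input frame.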
4th Claim Rejections - 35 USC § 103
Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication 2022/0270556 A1 (Neumeier et al.) and US Patent Publication 2021/0183020 A1 (Gollanapalli et al.) in view of US Patent Publication 2008/0018571 A1 (Feng).
Claim 13
Regarding Claim 13, Neumeier et al. and Gollanapalli et al. teach the method of claim 12, as noted above.
Neumeier et al. and Gollanapalli et al. do not explicitly teach transition frames.
[Feng, Fig. 15, showing using look-up tables for transition frame processing.]
However, Feng teaches wherein the method further includes determining, by the controller, whether the frame is a transition frame ("A change from insignificant motion to significant motion (or vice versa) the system may use a set of transition frames in order to avoid artifacts or other undesirable effects on the resulting image," paragraph [0060]).
Therefore, taking the teachings of Neumeier et al., Gollanapalli et al., and Feng as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the systems and methods for enhancing video conferencing and deblurring as taught by Neumeier et al. and Gollanapalli et al. to use transition frames as taught by Feng. The suggestion/motivation for doing so would have been that “the use of LCD's in certain "high end markets," such as video and graphic arts, is frustrated, in part, by the limited performance of the display,” as noted by the Feng disclosure in paragraph [0008]. The combination is further motivated because it would predictably provide better transitions between frames of a video, as there is a reasonable expectation that transition frames would be used, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 14
Regarding claim 14, Neumeier et al. and Gollanapalli et al. teach the method of claim 13, as noted above.
Neumeier et al. and Gollanapalli et al. do not explicitly teach transition frames.
However, Feng teaches wherein, in response to the frame being a transition frame, the method includes utilizing the first LUT ("For example, at least one of these components, elements or units may use a direct circuit structure, such as a memory, a processor, a logic circuit, a look-up table, etc. that may execute the respective functions," paragraph [0110]).
Neumeier et al., Gollanapalli et al. and Feng are combined as per claim 13.
Claim 15
Regarding claim 15, Neumeier et al. teach the method of claim 13, as noted above.
Gollanapalli et al. teach the method includes:
determining, by the controller, whether the second LUT is a same LUT as the first LUT ("A change from insignificant motion to significant motion (or vice versa) the system may use a set of transition frames in order to avoid artifacts or other undesirable effects on the resulting image," paragraph [0060]);
utilizing, by the controller in response to the second LUT being the same LUT as the first LUT, the first LUT for the backlight adjustment ("A change from insignificant motion to significant motion (or vice versa) the system may use a set of transition frames in order to avoid artifacts or other undesirable effects on the resulting image," paragraph [0060]); and
in response to the second LUT being a different LUT from the first LUT ("A change from insignificant motion to significant motion (or vice versa) the system may use a set of transition frames in order to avoid artifacts or other undesirable effects on the resulting image," paragraph [0060]):
utilizing, by the controller, the second LUT for the backlight adjustment ("A change from insignificant motion to significant motion (or vice versa) the system may use a set of transition frames in order to avoid artifacts or other undesirable effects on the resulting image," paragraph [0060]); and
marking, by the controller, a next predetermined number of frames of the video signal to be received as transition frames ("A change from insignificant motion to significant motion (or vice versa) the system may use a set of transition frames in order to avoid artifacts or other undesirable effects on the resulting image," paragraph [0060]).
Neumeier et al. and Gollanapalli et al. do not explicitly teach transition frames.
However, Feng teaches wherein the recited steps are performed in response to the frame not being a transition frame ("A change from insignificant motion to significant motion (or vice versa) the system may use a set of transition frames in order to avoid artifacts or other undesirable effects on the resulting image," paragraph [0060]).
Neumeier et al., Gollanapalli et al. and Feng are combined as per claim 13.
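The claim 15 control flow as mapped above (compare the second LUT with the first; if they are the same, continue using the first LUT; otherwise use the second LUT and mark the next predetermined number of frames as transition frames) can be sketched as follows. The function name, the LUT representation as plain lists, and the transition count are hypothetical illustrations, not taken from any cited reference.

```python
# Hypothetical sketch of the claim 15 LUT-selection logic for the
# backlight adjustment.

TRANSITION_FRAME_COUNT = 5  # assumed predetermined number of frames


def select_lut(first_lut, second_lut, pending_transition_frames):
    """Return (lut_to_use, updated pending transition-frame count)."""
    if second_lut == first_lut:
        # Same LUT: keep using the first LUT, no transition needed.
        return first_lut, pending_transition_frames
    # Different LUT: switch to the second LUT and mark the next
    # predetermined number of frames as transition frames.
    return second_lut, pending_transition_frames + TRANSITION_FRAME_COUNT
```

A caller would decrement the pending count as each subsequent frame is processed; any frame processed while the count is nonzero is treated as a transition frame.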
Allowable Subject Matter
Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Publication 2009/0322915 A1 to Cutler discloses that exposure and/or gain for the selected region are automatically enhanced for improved video quality, focusing on people or inanimate objects of interest.
US Patent 10,096,122 B1 to Agrawal et al. discloses that the segmentation of object image data may comprise capturing image data comprising color data and depth data. In some examples, the segmentation may further include separating the depth data into a plurality of clusters of image data, wherein each cluster is associated with a respective range of depth values. In various examples, the segmentation may comprise selecting a main cluster of image data as corresponding to an object of interest in the image data, and in various other examples, identifying pixels of the main cluster that correspond to the object of interest.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Heath E. Wells/Examiner, Art Unit 2664
Date: 4 February 2026